00:00:00.001 Started by upstream project "autotest-per-patch" build number 132376
00:00:00.001 originally caused by:
00:00:00.001  Started by user sys_sgci
00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.092 The recommended git tool is: git
00:00:00.092 using credential 00000000-0000-0000-0000-000000000002
00:00:00.094  > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.135 Fetching changes from the remote Git repository
00:00:00.138  > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.179 Using shallow fetch with depth 1
00:00:00.179 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.179  > git --version # timeout=10
00:00:00.227  > git --version # 'git version 2.39.2'
00:00:00.227 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.263  > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.625  > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.637  > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.653 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.653  > git config core.sparsecheckout # timeout=10
00:00:07.665  > git read-tree -mu HEAD # timeout=10
00:00:07.683  > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.704 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.704  > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.796 [Pipeline] Start of Pipeline
00:00:07.810 [Pipeline] library
00:00:07.811 Loading library shm_lib@master
00:00:07.811 Library shm_lib@master is cached. Copying from home.
00:00:07.828 [Pipeline] node
00:00:07.835 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.837 [Pipeline] {
00:00:07.849 [Pipeline] catchError
00:00:07.851 [Pipeline] {
00:00:07.862 [Pipeline] wrap
00:00:07.870 [Pipeline] {
00:00:07.875 [Pipeline] stage
00:00:07.877 [Pipeline] { (Prologue)
00:00:08.086 [Pipeline] sh
00:00:08.374 + logger -p user.info -t JENKINS-CI
00:00:08.392 [Pipeline] echo
00:00:08.394 Node: WFP8
00:00:08.402 [Pipeline] sh
00:00:08.699 [Pipeline] setCustomBuildProperty
00:00:08.712 [Pipeline] echo
00:00:08.714 Cleanup processes
00:00:08.719 [Pipeline] sh
00:00:09.003 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.003 3779514 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.017 [Pipeline] sh
00:00:09.300 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.300 ++ grep -v 'sudo pgrep'
00:00:09.300 ++ awk '{print $1}'
00:00:09.300 + sudo kill -9
00:00:09.300 + true
00:00:09.315 [Pipeline] cleanWs
00:00:09.325 [WS-CLEANUP] Deleting project workspace...
00:00:09.325 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.331 [WS-CLEANUP] done
00:00:09.335 [Pipeline] setCustomBuildProperty
00:00:09.348 [Pipeline] sh
00:00:09.630 + sudo git config --global --replace-all safe.directory '*'
00:00:09.752 [Pipeline] httpRequest
00:00:10.050 [Pipeline] echo
00:00:10.052 Sorcerer 10.211.164.20 is alive
00:00:10.061 [Pipeline] retry
00:00:10.064 [Pipeline] {
00:00:10.077 [Pipeline] httpRequest
00:00:10.082 HttpMethod: GET
00:00:10.083 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.083 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.105 Response Code: HTTP/1.1 200 OK
00:00:10.105 Success: Status code 200 is in the accepted range: 200,404
00:00:10.106 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:28.417 [Pipeline] }
00:00:28.434 [Pipeline] // retry
00:00:28.441 [Pipeline] sh
00:00:28.720 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:28.737 [Pipeline] httpRequest
00:00:29.183 [Pipeline] echo
00:00:29.186 Sorcerer 10.211.164.20 is alive
00:00:29.196 [Pipeline] retry
00:00:29.198 [Pipeline] {
00:00:29.212 [Pipeline] httpRequest
00:00:29.217 HttpMethod: GET
00:00:29.217 URL: http://10.211.164.20/packages/spdk_46fd068fc0e4066fa292fa4cae14e3de3d3789c9.tar.gz
00:00:29.218 Sending request to url: http://10.211.164.20/packages/spdk_46fd068fc0e4066fa292fa4cae14e3de3d3789c9.tar.gz
00:00:29.237 Response Code: HTTP/1.1 200 OK
00:00:29.238 Success: Status code 200 is in the accepted range: 200,404
00:00:29.238 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_46fd068fc0e4066fa292fa4cae14e3de3d3789c9.tar.gz
00:00:54.381 [Pipeline] }
00:00:54.398 [Pipeline] // retry
00:00:54.407 [Pipeline] sh
00:00:54.690 + tar --no-same-owner -xf spdk_46fd068fc0e4066fa292fa4cae14e3de3d3789c9.tar.gz
00:00:57.233 [Pipeline] sh
00:00:57.514 + git -C spdk log --oneline -n5
00:00:57.514 46fd068fc test/nvme/xnvme: Add io_uring_cmd
00:00:57.514 4d3e9954d test/nvme/xnvme: Add different io patterns
00:00:57.514 d5455995c test/nvme/xnvme: Add simple RPC validation test
00:00:57.514 69d73d129 test/nvme/xnvme: Add simple test with SPDK's fio plugin
00:00:57.514 637d0d0b9 scripts/rpc: Fix conserve_cpu arg in bdev_xnvme_create()
00:00:57.524 [Pipeline] }
00:00:57.536 [Pipeline] // stage
00:00:57.545 [Pipeline] stage
00:00:57.548 [Pipeline] { (Prepare)
00:00:57.566 [Pipeline] writeFile
00:00:57.589 [Pipeline] sh
00:00:57.871 + logger -p user.info -t JENKINS-CI
00:00:57.884 [Pipeline] sh
00:00:58.169 + logger -p user.info -t JENKINS-CI
00:00:58.182 [Pipeline] sh
00:00:58.464 + cat autorun-spdk.conf
00:00:58.464 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:58.464 SPDK_TEST_NVMF=1
00:00:58.464 SPDK_TEST_NVME_CLI=1
00:00:58.464 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:58.464 SPDK_TEST_NVMF_NICS=e810
00:00:58.464 SPDK_TEST_VFIOUSER=1
00:00:58.464 SPDK_RUN_UBSAN=1
00:00:58.464 NET_TYPE=phy
00:00:58.471 RUN_NIGHTLY=0
00:00:58.476 [Pipeline] readFile
00:00:58.502 [Pipeline] withEnv
00:00:58.504 [Pipeline] {
00:00:58.517 [Pipeline] sh
00:00:58.803 + set -ex
00:00:58.803 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:58.803 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:58.803 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:58.803 ++ SPDK_TEST_NVMF=1
00:00:58.803 ++ SPDK_TEST_NVME_CLI=1
00:00:58.803 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:58.803 ++ SPDK_TEST_NVMF_NICS=e810
00:00:58.803 ++ SPDK_TEST_VFIOUSER=1
00:00:58.803 ++ SPDK_RUN_UBSAN=1
00:00:58.803 ++ NET_TYPE=phy
00:00:58.803 ++ RUN_NIGHTLY=0
00:00:58.803 + case $SPDK_TEST_NVMF_NICS in
00:00:58.803 + DRIVERS=ice
00:00:58.803 + [[ tcp == \r\d\m\a ]]
00:00:58.803 + [[ -n ice ]]
00:00:58.803 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:58.803 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:58.803 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:58.803 rmmod: ERROR: Module irdma is not currently loaded
00:00:58.803 rmmod: ERROR: Module i40iw is not currently loaded
00:00:58.803 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:58.803 + true
00:00:58.803 + for D in $DRIVERS
00:00:58.803 + sudo modprobe ice
00:00:58.803 + exit 0
00:00:58.813 [Pipeline] }
00:00:58.827 [Pipeline] // withEnv
00:00:58.832 [Pipeline] }
00:00:58.846 [Pipeline] // stage
00:00:58.855 [Pipeline] catchError
00:00:58.857 [Pipeline] {
00:00:58.871 [Pipeline] timeout
00:00:58.871 Timeout set to expire in 1 hr 0 min
00:00:58.873 [Pipeline] {
00:00:58.891 [Pipeline] stage
00:00:58.893 [Pipeline] { (Tests)
00:00:58.905 [Pipeline] sh
00:00:59.185 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.185 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.185 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.185 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:59.185 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:59.185 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:59.185 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:59.185 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:59.185 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:59.185 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:59.185 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:59.185 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.185 + source /etc/os-release
00:00:59.185 ++ NAME='Fedora Linux'
00:00:59.185 ++ VERSION='39 (Cloud Edition)'
00:00:59.185 ++ ID=fedora
00:00:59.185 ++ VERSION_ID=39
00:00:59.185 ++ VERSION_CODENAME=
00:00:59.185 ++ PLATFORM_ID=platform:f39
00:00:59.185 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:59.185 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:59.185 ++ LOGO=fedora-logo-icon
00:00:59.185 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:59.185 ++ HOME_URL=https://fedoraproject.org/
00:00:59.185 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:59.185 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:59.185 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:59.185 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:59.185 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:59.185 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:59.185 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:59.185 ++ SUPPORT_END=2024-11-12
00:00:59.185 ++ VARIANT='Cloud Edition'
00:00:59.185 ++ VARIANT_ID=cloud
00:00:59.185 + uname -a
00:00:59.185 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:59.185 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:01.722 Hugepages
00:01:01.722 node hugesize free / total
00:01:01.722 node0 1048576kB 0 / 0
00:01:01.722 node0 2048kB 0 / 0
00:01:01.722 node1 1048576kB 0 / 0
00:01:01.722 node1 2048kB 0 / 0
00:01:01.722
00:01:01.722 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:01.722 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:01.722 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:01.722 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:01.722 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:01.722 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:01.722 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:01.722 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:01.722 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:01.722 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:01.722 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:01.722 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:01.722 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:01.722 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:01.722 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:01.722 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:01.722 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:01.722 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:01.722 + rm -f /tmp/spdk-ld-path
00:01:01.722 + source autorun-spdk.conf
00:01:01.722 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.722 ++ SPDK_TEST_NVMF=1
00:01:01.722 ++ SPDK_TEST_NVME_CLI=1
00:01:01.722 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.722 ++ SPDK_TEST_NVMF_NICS=e810
00:01:01.722 ++ SPDK_TEST_VFIOUSER=1
00:01:01.722 ++ SPDK_RUN_UBSAN=1
00:01:01.722 ++ NET_TYPE=phy
00:01:01.722 ++ RUN_NIGHTLY=0
00:01:01.722 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:01.722 + [[ -n '' ]]
00:01:01.722 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:01.723 + for M in /var/spdk/build-*-manifest.txt
00:01:01.723 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:01.723 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:01.982 + for M in /var/spdk/build-*-manifest.txt
00:01:01.982 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:01.982 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:01.982 + for M in /var/spdk/build-*-manifest.txt
00:01:01.982 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:01.982 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:01.982 ++ uname
00:01:01.982 + [[ Linux == \L\i\n\u\x ]]
00:01:01.982 + sudo dmesg -T
00:01:01.982 + sudo dmesg --clear
00:01:01.982 + dmesg_pid=3780438
00:01:01.982 + [[ Fedora Linux == FreeBSD ]]
00:01:01.982 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.982 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.982 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:01.982 + [[ -x /usr/src/fio-static/fio ]]
00:01:01.982 + export FIO_BIN=/usr/src/fio-static/fio
00:01:01.982 + FIO_BIN=/usr/src/fio-static/fio
00:01:01.982 + sudo dmesg -Tw
00:01:01.982 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:01.982 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:01.982 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:01.982 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.982 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.982 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:01.982 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.982 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.982 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:01.982 10:55:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
10:55:29 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
10:55:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
10:55:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
10:55:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
10:55:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.983 10:55:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
10:55:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
10:55:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
10:55:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
10:55:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
10:55:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
10:55:29 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
10:55:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
10:55:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
10:55:29 -- scripts/common.sh@15 -- $ shopt -s extglob
10:55:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
10:55:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
10:55:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
10:55:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:55:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:55:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:55:29 -- paths/export.sh@5 -- $ export PATH
10:55:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:55:29 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
10:55:29 -- common/autobuild_common.sh@493 -- $ date +%s
10:55:29 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732096529.XXXXXX
10:55:29 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732096529.sAT8Ye
10:55:29 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
10:55:29 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
10:55:29 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
10:55:29 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
10:55:29 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
10:55:29 -- common/autobuild_common.sh@509 -- $ get_config_params
10:55:29 -- common/autotest_common.sh@409 -- $ xtrace_disable
10:55:29 -- common/autotest_common.sh@10 -- $ set +x
10:55:29 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
10:55:29 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
10:55:29 -- pm/common@17 -- $ local monitor
00:01:02.242 10:55:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:02.242 10:55:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:02.242 10:55:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:02.242 10:55:29 -- pm/common@21 -- $ date +%s
00:01:02.242 10:55:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:02.242 10:55:29 -- pm/common@21 -- $ date +%s
00:01:02.242 10:55:29 -- pm/common@25 -- $ sleep 1
00:01:02.242 10:55:29 -- pm/common@21 -- $ date +%s
00:01:02.242 10:55:29 -- pm/common@21 -- $ date +%s
00:01:02.242 10:55:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732096529
00:01:02.242 10:55:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732096529
00:01:02.242 10:55:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732096529
00:01:02.242 10:55:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732096529
00:01:02.242 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732096529_collect-cpu-load.pm.log
00:01:02.242 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732096529_collect-vmstat.pm.log
00:01:02.242 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732096529_collect-cpu-temp.pm.log
00:01:02.242 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732096529_collect-bmc-pm.bmc.pm.log
00:01:03.179 10:55:30 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
10:55:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
10:55:30 -- spdk/autobuild.sh@12 -- $ umask 022
10:55:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
10:55:30 -- spdk/autobuild.sh@16 -- $ date -u
00:01:03.179 Wed Nov 20 09:55:30 AM UTC 2024
10:55:30 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:03.179 v25.01-pre-207-g46fd068fc
10:55:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
10:55:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
10:55:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
10:55:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
10:55:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable
10:55:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:03.179 ************************************
00:01:03.179 START TEST ubsan
00:01:03.179 ************************************
10:55:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
using ubsan
00:01:03.179
00:01:03.179 real	0m0.000s
00:01:03.179 user	0m0.000s
00:01:03.179 sys	0m0.000s
10:55:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
10:55:30 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:03.179 ************************************
00:01:03.179 END TEST ubsan
00:01:03.179 ************************************
10:55:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
10:55:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
10:55:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
10:55:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
10:55:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
10:55:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
10:55:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
10:55:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
10:55:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:03.439 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:03.439 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:03.698 Using 'verbs' RDMA provider
00:01:16.848 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:29.059 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:29.059 Creating mk/config.mk...done.
00:01:29.059 Creating mk/cc.flags.mk...done.
00:01:29.059 Type 'make' to build.
10:55:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
10:55:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
10:55:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
10:55:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.059 ************************************
00:01:29.059 START TEST make
00:01:29.059 ************************************
10:55:56 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:29.317 make[1]: Nothing to be done for 'all'.
00:01:30.707 The Meson build system
00:01:30.707 Version: 1.5.0
00:01:30.707 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:30.707 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:30.707 Build type: native build
00:01:30.707 Project name: libvfio-user
00:01:30.707 Project version: 0.0.1
00:01:30.707 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:30.707 C linker for the host machine: cc ld.bfd 2.40-14
00:01:30.707 Host machine cpu family: x86_64
00:01:30.707 Host machine cpu: x86_64
00:01:30.707 Run-time dependency threads found: YES
00:01:30.707 Library dl found: YES
00:01:30.707 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:30.707 Run-time dependency json-c found: YES 0.17
00:01:30.707 Run-time dependency cmocka found: YES 1.1.7
00:01:30.707 Program pytest-3 found: NO
00:01:30.707 Program flake8 found: NO
00:01:30.707 Program misspell-fixer found: NO
00:01:30.707 Program restructuredtext-lint found: NO
00:01:30.707 Program valgrind found: YES (/usr/bin/valgrind)
00:01:30.707 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:30.707 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:30.707 Compiler for C supports arguments -Wwrite-strings: YES
00:01:30.707 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:30.707 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:30.707 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:30.707 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:30.707 Build targets in project: 8
00:01:30.707 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:30.707 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:30.707
00:01:30.707 libvfio-user 0.0.1
00:01:30.707
00:01:30.707 User defined options
00:01:30.707 buildtype : debug
00:01:30.707 default_library: shared
00:01:30.707 libdir : /usr/local/lib
00:01:30.707
00:01:30.707 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:30.965 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:31.225 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:31.225 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:31.225 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:31.225 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:31.225 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:31.225 [6/37] Compiling C object samples/null.p/null.c.o
00:01:31.225 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:31.225 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:31.225 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:31.225 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:31.225 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:31.225 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:31.225 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:31.225 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:31.225 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:31.225 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:31.225 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:31.225 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:31.225 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:31.225 [20/37] Compiling C object samples/server.p/server.c.o
00:01:31.225 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:31.225 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:31.225 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:31.225 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:31.225 [25/37] Compiling C object samples/client.p/client.c.o
00:01:31.225 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:31.225 [27/37] Linking target samples/client
00:01:31.225 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:31.225 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:31.225 [30/37] Linking target test/unit_tests
00:01:31.225 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:31.484 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:31.484 [33/37] Linking target samples/server
00:01:31.484 [34/37] Linking target samples/lspci
00:01:31.484 [35/37] Linking target samples/gpio-pci-idio-16
00:01:31.484 [36/37] Linking target samples/null
00:01:31.484 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:31.484 INFO: autodetecting backend as ninja
00:01:31.484 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:31.484 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:32.052 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:32.052 ninja: no work to do.
00:01:37.323 The Meson build system
00:01:37.323 Version: 1.5.0
00:01:37.323 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:37.323 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:37.323 Build type: native build
00:01:37.323 Program cat found: YES (/usr/bin/cat)
00:01:37.323 Project name: DPDK
00:01:37.323 Project version: 24.03.0
00:01:37.323 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:37.323 C linker for the host machine: cc ld.bfd 2.40-14
00:01:37.323 Host machine cpu family: x86_64
00:01:37.323 Host machine cpu: x86_64
00:01:37.323 Message: ## Building in Developer Mode ##
00:01:37.323 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:37.323 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:37.323 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:37.323 Program python3 found: YES (/usr/bin/python3)
00:01:37.323 Program cat found: YES (/usr/bin/cat)
00:01:37.323 Compiler for C supports arguments -march=native: YES
00:01:37.323 Checking for size of "void *" : 8
00:01:37.323 Checking for size of "void *" : 8 (cached)
00:01:37.323 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:37.323 Library m found: YES
00:01:37.323 Library numa found: YES
00:01:37.323 Has header "numaif.h" : YES
00:01:37.323 Library fdt found: NO
00:01:37.323 Library execinfo found: NO
00:01:37.323 Has header "execinfo.h" : YES
00:01:37.323 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:37.323 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:37.323 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:37.323 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:37.323 Run-time dependency openssl found: YES 3.1.1
00:01:37.323 Run-time dependency libpcap found: YES 1.10.4
00:01:37.323 Has header "pcap.h" with dependency libpcap: YES
00:01:37.323 Compiler for C supports arguments -Wcast-qual: YES
00:01:37.323 Compiler for C supports arguments -Wdeprecated: YES
00:01:37.323 Compiler for C supports arguments -Wformat: YES
00:01:37.323 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:37.323 Compiler for C supports arguments -Wformat-security: NO
00:01:37.323 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:37.323 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:37.323 Compiler for C supports arguments -Wnested-externs: YES
00:01:37.323 Compiler for C supports arguments -Wold-style-definition: YES
00:01:37.323 Compiler for C supports arguments -Wpointer-arith: YES
00:01:37.323 Compiler for C supports arguments -Wsign-compare: YES
00:01:37.323 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:37.323 Compiler for C supports arguments -Wundef: YES
00:01:37.323 Compiler for C supports arguments -Wwrite-strings: YES
00:01:37.323 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:37.323 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:37.323 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:37.323 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:37.323 Program objdump found: YES (/usr/bin/objdump)
00:01:37.323 Compiler for C supports arguments -mavx512f: YES
00:01:37.323 Checking if "AVX512 checking" compiles: YES
00:01:37.323 Fetching value of define "__SSE4_2__" : 1
00:01:37.323 Fetching value of define "__AES__" : 1
00:01:37.323 Fetching value of define "__AVX__" : 1
00:01:37.323 Fetching value of define "__AVX2__" : 1
00:01:37.323 Fetching value of define "__AVX512BW__" : 1
00:01:37.323 Fetching value of define "__AVX512CD__" : 1
00:01:37.323 Fetching value of define "__AVX512DQ__" : 1
00:01:37.323 Fetching value of define "__AVX512F__" : 1
00:01:37.323 Fetching value of define "__AVX512VL__" : 1 00:01:37.323 Fetching value of define "__PCLMUL__" : 1 00:01:37.323 Fetching value of define "__RDRND__" : 1 00:01:37.323 Fetching value of define "__RDSEED__" : 1 00:01:37.323 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:37.323 Fetching value of define "__znver1__" : (undefined) 00:01:37.323 Fetching value of define "__znver2__" : (undefined) 00:01:37.323 Fetching value of define "__znver3__" : (undefined) 00:01:37.323 Fetching value of define "__znver4__" : (undefined) 00:01:37.323 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:37.323 Message: lib/log: Defining dependency "log" 00:01:37.323 Message: lib/kvargs: Defining dependency "kvargs" 00:01:37.323 Message: lib/telemetry: Defining dependency "telemetry" 00:01:37.323 Checking for function "getentropy" : NO 00:01:37.323 Message: lib/eal: Defining dependency "eal" 00:01:37.323 Message: lib/ring: Defining dependency "ring" 00:01:37.323 Message: lib/rcu: Defining dependency "rcu" 00:01:37.323 Message: lib/mempool: Defining dependency "mempool" 00:01:37.323 Message: lib/mbuf: Defining dependency "mbuf" 00:01:37.323 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:37.323 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:37.323 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:37.323 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:37.323 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:37.323 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:37.323 Compiler for C supports arguments -mpclmul: YES 00:01:37.323 Compiler for C supports arguments -maes: YES 00:01:37.323 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.323 Compiler for C supports arguments -mavx512bw: YES 00:01:37.323 Compiler for C supports arguments -mavx512dq: YES 00:01:37.323 Compiler for C supports arguments -mavx512vl: YES 00:01:37.323 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:37.323 Compiler for C supports arguments -mavx2: YES 00:01:37.323 Compiler for C supports arguments -mavx: YES 00:01:37.323 Message: lib/net: Defining dependency "net" 00:01:37.323 Message: lib/meter: Defining dependency "meter" 00:01:37.323 Message: lib/ethdev: Defining dependency "ethdev" 00:01:37.323 Message: lib/pci: Defining dependency "pci" 00:01:37.323 Message: lib/cmdline: Defining dependency "cmdline" 00:01:37.323 Message: lib/hash: Defining dependency "hash" 00:01:37.323 Message: lib/timer: Defining dependency "timer" 00:01:37.323 Message: lib/compressdev: Defining dependency "compressdev" 00:01:37.323 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:37.323 Message: lib/dmadev: Defining dependency "dmadev" 00:01:37.323 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:37.323 Message: lib/power: Defining dependency "power" 00:01:37.323 Message: lib/reorder: Defining dependency "reorder" 00:01:37.323 Message: lib/security: Defining dependency "security" 00:01:37.323 Has header "linux/userfaultfd.h" : YES 00:01:37.323 Has header "linux/vduse.h" : YES 00:01:37.323 Message: lib/vhost: Defining dependency "vhost" 00:01:37.323 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:37.323 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:37.323 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:37.323 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:37.323 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:37.323 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:37.323 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:37.323 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:37.323 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:37.323 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:37.323 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:37.323 Configuring doxy-api-html.conf using configuration
00:01:37.323 Configuring doxy-api-man.conf using configuration
00:01:37.323 Program mandb found: YES (/usr/bin/mandb)
00:01:37.323 Program sphinx-build found: NO
00:01:37.323 Configuring rte_build_config.h using configuration
00:01:37.323 Message: 
00:01:37.323 =================
00:01:37.323 Applications Enabled
00:01:37.323 =================
00:01:37.323 
00:01:37.323 apps:
00:01:37.323 
00:01:37.323 
00:01:37.323 Message: 
00:01:37.323 =================
00:01:37.323 Libraries Enabled
00:01:37.323 =================
00:01:37.323 
00:01:37.324 libs: 
00:01:37.324 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:01:37.324 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:01:37.324 cryptodev, dmadev, power, reorder, security, vhost, 
00:01:37.324 
00:01:37.324 Message: 
00:01:37.324 ===============
00:01:37.324 Drivers Enabled
00:01:37.324 ===============
00:01:37.324 
00:01:37.324 common: 
00:01:37.324 
00:01:37.324 bus: 
00:01:37.324 pci, vdev, 
00:01:37.324 mempool: 
00:01:37.324 ring, 
00:01:37.324 dma: 
00:01:37.324 
00:01:37.324 net: 
00:01:37.324 
00:01:37.324 crypto: 
00:01:37.324 
00:01:37.324 compress: 
00:01:37.324 
00:01:37.324 vdpa: 
00:01:37.324 
00:01:37.324 
00:01:37.324 Message: 
00:01:37.324 =================
00:01:37.324 Content Skipped
00:01:37.324 =================
00:01:37.324 
00:01:37.324 apps:
00:01:37.324 dumpcap: explicitly disabled via build config
00:01:37.324 graph: explicitly disabled via build config
00:01:37.324 pdump: explicitly disabled via build config
00:01:37.324 proc-info: explicitly disabled via build config
00:01:37.324 test-acl: explicitly disabled via build config
00:01:37.324 test-bbdev: explicitly disabled via build config
00:01:37.324 test-cmdline: explicitly disabled via build config
00:01:37.324 test-compress-perf: explicitly disabled via build config
00:01:37.324 test-crypto-perf: explicitly disabled via build config
00:01:37.324 test-dma-perf: explicitly disabled via build config
00:01:37.324 test-eventdev: explicitly disabled via build config
00:01:37.324 test-fib: explicitly disabled via build config
00:01:37.324 test-flow-perf: explicitly disabled via build config
00:01:37.324 test-gpudev: explicitly disabled via build config
00:01:37.324 test-mldev: explicitly disabled via build config
00:01:37.324 test-pipeline: explicitly disabled via build config
00:01:37.324 test-pmd: explicitly disabled via build config
00:01:37.324 test-regex: explicitly disabled via build config
00:01:37.324 test-sad: explicitly disabled via build config
00:01:37.324 test-security-perf: explicitly disabled via build config
00:01:37.324 
00:01:37.324 libs:
00:01:37.324 argparse: explicitly disabled via build config
00:01:37.324 metrics: explicitly disabled via build config
00:01:37.324 acl: explicitly disabled via build config
00:01:37.324 bbdev: explicitly disabled via build config
00:01:37.324 bitratestats: explicitly disabled via build config
00:01:37.324 bpf: explicitly disabled via build config
00:01:37.324 cfgfile: explicitly disabled via build config
00:01:37.324 distributor: explicitly disabled via build config
00:01:37.324 efd: explicitly disabled via build config
00:01:37.324 eventdev: explicitly disabled via build config
00:01:37.324 dispatcher: explicitly disabled via build config
00:01:37.324 gpudev: explicitly disabled via build config
00:01:37.324 gro: explicitly disabled via build config
00:01:37.324 gso: explicitly disabled via build config
00:01:37.324 ip_frag: explicitly disabled via build config
00:01:37.324 jobstats: explicitly disabled via build config
00:01:37.324 latencystats: explicitly disabled via build config
00:01:37.324 lpm: explicitly disabled via build config
00:01:37.324 member: explicitly disabled via build config
00:01:37.324 pcapng: explicitly disabled via build config
00:01:37.324 rawdev: explicitly disabled via build config
00:01:37.324 regexdev: explicitly disabled via build config
00:01:37.324 mldev: explicitly disabled via build config
00:01:37.324 rib: explicitly disabled via build config
00:01:37.324 sched: explicitly disabled via build config
00:01:37.324 stack: explicitly disabled via build config
00:01:37.324 ipsec: explicitly disabled via build config
00:01:37.324 pdcp: explicitly disabled via build config
00:01:37.324 fib: explicitly disabled via build config
00:01:37.324 port: explicitly disabled via build config
00:01:37.324 pdump: explicitly disabled via build config
00:01:37.324 table: explicitly disabled via build config
00:01:37.324 pipeline: explicitly disabled via build config
00:01:37.324 graph: explicitly disabled via build config
00:01:37.324 node: explicitly disabled via build config
00:01:37.324 
00:01:37.324 drivers:
00:01:37.324 common/cpt: not in enabled drivers build config
00:01:37.324 common/dpaax: not in enabled drivers build config
00:01:37.324 common/iavf: not in enabled drivers build config
00:01:37.324 common/idpf: not in enabled drivers build config
00:01:37.324 common/ionic: not in enabled drivers build config
00:01:37.324 common/mvep: not in enabled drivers build config
00:01:37.324 common/octeontx: not in enabled drivers build config
00:01:37.324 bus/auxiliary: not in enabled drivers build config
00:01:37.324 bus/cdx: not in enabled drivers build config
00:01:37.324 bus/dpaa: not in enabled drivers build config
00:01:37.324 bus/fslmc: not in enabled drivers build config
00:01:37.324 bus/ifpga: not in enabled drivers build config
00:01:37.324 bus/platform: not in enabled drivers build config
00:01:37.324 bus/uacce: not in enabled drivers build config
00:01:37.324 bus/vmbus: not in enabled drivers build config
00:01:37.324 common/cnxk: not in enabled drivers build config
00:01:37.324 common/mlx5: not in enabled drivers build config
00:01:37.324 common/nfp: not in enabled drivers build config
00:01:37.324 common/nitrox: not in enabled drivers build config
00:01:37.324 common/qat: not in enabled drivers build config
00:01:37.324 common/sfc_efx: not in enabled drivers build config
00:01:37.324 mempool/bucket: not in enabled drivers build config
00:01:37.324 mempool/cnxk: not in enabled drivers build config
00:01:37.324 mempool/dpaa: not in enabled drivers build config
00:01:37.324 mempool/dpaa2: not in enabled drivers build config
00:01:37.324 mempool/octeontx: not in enabled drivers build config
00:01:37.324 mempool/stack: not in enabled drivers build config
00:01:37.324 dma/cnxk: not in enabled drivers build config
00:01:37.324 dma/dpaa: not in enabled drivers build config
00:01:37.324 dma/dpaa2: not in enabled drivers build config
00:01:37.324 dma/hisilicon: not in enabled drivers build config
00:01:37.324 dma/idxd: not in enabled drivers build config
00:01:37.324 dma/ioat: not in enabled drivers build config
00:01:37.324 dma/skeleton: not in enabled drivers build config
00:01:37.324 net/af_packet: not in enabled drivers build config
00:01:37.324 net/af_xdp: not in enabled drivers build config
00:01:37.324 net/ark: not in enabled drivers build config
00:01:37.324 net/atlantic: not in enabled drivers build config
00:01:37.324 net/avp: not in enabled drivers build config
00:01:37.324 net/axgbe: not in enabled drivers build config
00:01:37.324 net/bnx2x: not in enabled drivers build config
00:01:37.324 net/bnxt: not in enabled drivers build config
00:01:37.324 net/bonding: not in enabled drivers build config
00:01:37.324 net/cnxk: not in enabled drivers build config
00:01:37.324 net/cpfl: not in enabled drivers build config
00:01:37.324 net/cxgbe: not in enabled drivers build config
00:01:37.324 net/dpaa: not in enabled drivers build config
00:01:37.324 net/dpaa2: not in enabled drivers build config
00:01:37.324 net/e1000: not in enabled drivers build config
00:01:37.324 net/ena: not in enabled drivers build config
00:01:37.324 net/enetc: not in enabled drivers build config
00:01:37.324 net/enetfec: not in enabled drivers build config
00:01:37.324 net/enic: not in enabled drivers build config
00:01:37.324 net/failsafe: not in enabled drivers build config
00:01:37.324 net/fm10k: not in enabled drivers build config
00:01:37.324 net/gve: not in enabled drivers build config
00:01:37.324 net/hinic: not in enabled drivers build config
00:01:37.324 net/hns3: not in enabled drivers build config
00:01:37.324 net/i40e: not in enabled drivers build config
00:01:37.324 net/iavf: not in enabled drivers build config
00:01:37.324 net/ice: not in enabled drivers build config
00:01:37.324 net/idpf: not in enabled drivers build config
00:01:37.324 net/igc: not in enabled drivers build config
00:01:37.324 net/ionic: not in enabled drivers build config
00:01:37.324 net/ipn3ke: not in enabled drivers build config
00:01:37.324 net/ixgbe: not in enabled drivers build config
00:01:37.324 net/mana: not in enabled drivers build config
00:01:37.324 net/memif: not in enabled drivers build config
00:01:37.324 net/mlx4: not in enabled drivers build config
00:01:37.324 net/mlx5: not in enabled drivers build config
00:01:37.324 net/mvneta: not in enabled drivers build config
00:01:37.324 net/mvpp2: not in enabled drivers build config
00:01:37.324 net/netvsc: not in enabled drivers build config
00:01:37.324 net/nfb: not in enabled drivers build config
00:01:37.324 net/nfp: not in enabled drivers build config
00:01:37.324 net/ngbe: not in enabled drivers build config
00:01:37.324 net/null: not in enabled drivers build config
00:01:37.324 net/octeontx: not in enabled drivers build config
00:01:37.324 net/octeon_ep: not in enabled drivers build config
00:01:37.324 net/pcap: not in enabled drivers build config
00:01:37.324 net/pfe: not in enabled drivers build config
00:01:37.324 net/qede: not in enabled drivers build config
00:01:37.324 net/ring: not in enabled drivers build config
00:01:37.324 net/sfc: not in enabled drivers build config
00:01:37.324 net/softnic: not in enabled drivers build config
00:01:37.324 net/tap: not in enabled drivers build config
00:01:37.324 net/thunderx: not in enabled drivers build config
00:01:37.324 net/txgbe: not in enabled drivers build config
00:01:37.324 net/vdev_netvsc: not in enabled drivers build config
00:01:37.324 net/vhost: not in enabled drivers build config
00:01:37.324 net/virtio: not in enabled drivers build config
00:01:37.324 net/vmxnet3: not in enabled drivers build config
00:01:37.324 raw/*: missing internal dependency, "rawdev"
00:01:37.324 crypto/armv8: not in enabled drivers build config
00:01:37.324 crypto/bcmfs: not in enabled drivers build config
00:01:37.324 crypto/caam_jr: not in enabled drivers build config
00:01:37.324 crypto/ccp: not in enabled drivers build config
00:01:37.324 crypto/cnxk: not in enabled drivers build config
00:01:37.324 crypto/dpaa_sec: not in enabled drivers build config
00:01:37.324 crypto/dpaa2_sec: not in enabled drivers build config
00:01:37.324 crypto/ipsec_mb: not in enabled drivers build config
00:01:37.324 crypto/mlx5: not in enabled drivers build config
00:01:37.324 crypto/mvsam: not in enabled drivers build config
00:01:37.324 crypto/nitrox: not in enabled drivers build config
00:01:37.325 crypto/null: not in enabled drivers build config
00:01:37.325 crypto/octeontx: not in enabled drivers build config
00:01:37.325 crypto/openssl: not in enabled drivers build config
00:01:37.325 crypto/scheduler: not in enabled drivers build config
00:01:37.325 crypto/uadk: not in enabled drivers build config
00:01:37.325 crypto/virtio: not in enabled drivers build config
00:01:37.325 compress/isal: not in enabled drivers build config
00:01:37.325 compress/mlx5: not in enabled drivers build config
00:01:37.325 compress/nitrox: not in enabled drivers build config
00:01:37.325 compress/octeontx: not in enabled drivers build config
00:01:37.325 compress/zlib: not in enabled drivers build config
00:01:37.325 regex/*: missing internal dependency, "regexdev"
00:01:37.325 ml/*: missing internal dependency, "mldev"
00:01:37.325 vdpa/ifc: not in enabled drivers build config
00:01:37.325 vdpa/mlx5: not in enabled drivers build config
00:01:37.325 vdpa/nfp: not in enabled drivers build config
00:01:37.325 vdpa/sfc: not in enabled drivers build config
00:01:37.325 event/*: missing internal dependency, "eventdev"
00:01:37.325 baseband/*: missing internal dependency, "bbdev"
00:01:37.325 gpu/*: missing internal dependency, "gpudev"
00:01:37.325 
00:01:37.325 
00:01:37.325 Build targets in project: 85
00:01:37.325 
00:01:37.325 DPDK 24.03.0
00:01:37.325 
00:01:37.325 User defined options
00:01:37.325 buildtype : debug
00:01:37.325 default_library : shared
00:01:37.325 libdir : lib
00:01:37.325 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:37.325 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:37.325 c_link_args : 
00:01:37.325 cpu_instruction_set: native
00:01:37.325 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:37.325 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:37.325 enable_docs : false
00:01:37.325 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:37.325 enable_kmods : false
00:01:37.325 max_lcores : 128
00:01:37.325 tests : false
00:01:37.325 
00:01:37.325 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:37.899 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:37.899 [1/268] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.899 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.899 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.899 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.899 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.899 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.899 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.899 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.899 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.899 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:37.899 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.899 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.899 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:37.899 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.899 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:37.899 [16/268] Linking static target lib/librte_kvargs.a 00:01:37.899 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.899 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.899 [19/268] Linking static target lib/librte_log.a 00:01:38.158 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.158 [21/268] Linking static target lib/librte_pci.a 00:01:38.158 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.158 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.158 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.158 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.423 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.423 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:38.423 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:38.423 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:38.423 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:38.423 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:38.423 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:38.423 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:38.423 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:38.423 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.423 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.423 [37/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.423 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.423 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.423 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.423 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.423 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.423 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:38.423 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.423 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.423 [46/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.423 
[47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.423 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.423 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.423 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:38.423 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.423 [52/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.423 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.423 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.423 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.423 [56/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:38.423 [57/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:38.423 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:38.423 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:38.423 [60/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:38.423 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.423 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:38.423 [63/268] Linking static target lib/librte_meter.a 00:01:38.423 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:38.423 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.423 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.423 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.423 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:38.423 [69/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.423 [70/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.423 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.423 [72/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:38.423 [73/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:38.423 [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:38.423 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.423 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.423 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.423 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.423 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.423 [80/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.423 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.423 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.423 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:38.423 [84/268] Linking static target lib/librte_telemetry.a 00:01:38.423 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:38.423 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.423 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.423 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:38.423 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.423 [90/268] Linking static target lib/librte_ring.a 00:01:38.423 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.423 [92/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.423 [93/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:38.423 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:38.423 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.423 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.423 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.423 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:38.423 [99/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.424 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.424 [101/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.424 [102/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.682 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.682 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.682 [105/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.682 [106/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:38.682 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:38.682 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:38.682 [109/268] Linking static target lib/librte_rcu.a 00:01:38.682 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.682 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:38.682 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:38.682 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:38.682 [114/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:38.682 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:38.682 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:38.682 [117/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.682 [118/268] Linking static target lib/librte_net.a 00:01:38.682 [119/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.682 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:38.682 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:38.682 [122/268] Linking static target lib/librte_mempool.a 00:01:38.682 [123/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:38.682 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:38.682 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:38.682 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:38.682 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:38.682 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:38.682 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:38.682 [130/268] Linking static target lib/librte_cmdline.a 00:01:38.682 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:38.682 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.682 [133/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.682 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:38.682 [135/268] Linking static target lib/librte_eal.a 00:01:38.682 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:38.682 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:38.682 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.682 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:38.682 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:38.682 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.682 [142/268] Linking target lib/librte_log.so.24.1 00:01:38.682 [143/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:38.682 [144/268] Linking static target lib/librte_mbuf.a 00:01:38.942 [145/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:38.942 [146/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.942 [147/268] Linking static target lib/librte_timer.a 00:01:38.942 [148/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:38.942 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:38.942 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.942 [151/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:38.942 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:38.942 [153/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:38.942 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:38.942 [155/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:38.942 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.942 [157/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.942 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.942 [159/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 
00:01:38.942 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:38.942 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:38.942 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:38.942 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.942 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:38.942 [165/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:38.942 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:38.942 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:38.942 [168/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:38.942 [169/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.942 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:38.942 [171/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:38.942 [172/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:38.942 [173/268] Linking target lib/librte_kvargs.so.24.1 00:01:38.942 [174/268] Linking target lib/librte_telemetry.so.24.1 00:01:38.942 [175/268] Linking static target lib/librte_reorder.a 00:01:38.942 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:38.942 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:38.942 [178/268] Linking static target lib/librte_power.a 00:01:38.942 [179/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:38.942 [180/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.942 [181/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:38.942 [182/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:38.942 [183/268] Linking static target lib/librte_compressdev.a 00:01:38.942 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:38.942 [185/268] Linking static target lib/librte_dmadev.a 00:01:38.942 [186/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:39.201 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.201 [188/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.201 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.201 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.201 [191/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:39.201 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:39.201 [193/268] Linking static target drivers/librte_bus_vdev.a 00:01:39.201 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.201 [195/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:39.201 [196/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:39.201 [197/268] Linking static target lib/librte_security.a 00:01:39.201 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:39.201 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:39.201 [200/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.201 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:39.201 [202/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.201 [203/268] Linking static target drivers/librte_bus_pci.a 00:01:39.201 [204/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.201 
[205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.201 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.201 [207/268] Linking static target lib/librte_hash.a 00:01:39.201 [208/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.201 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.201 [210/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.201 [211/268] Linking static target drivers/librte_mempool_ring.a 00:01:39.461 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.461 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.461 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.461 [215/268] Linking static target lib/librte_cryptodev.a 00:01:39.461 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.461 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:39.461 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.461 [219/268] Linking static target lib/librte_ethdev.a 00:01:39.719 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.719 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.719 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.719 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.977 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.977 
[225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:39.977 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.235 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.172 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:41.172 [229/268] Linking static target lib/librte_vhost.a 00:01:41.431 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.807 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.080 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.019 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.019 [234/268] Linking target lib/librte_eal.so.24.1 00:01:49.019 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:49.019 [236/268] Linking target lib/librte_ring.so.24.1 00:01:49.019 [237/268] Linking target lib/librte_timer.so.24.1 00:01:49.019 [238/268] Linking target lib/librte_meter.so.24.1 00:01:49.019 [239/268] Linking target lib/librte_pci.so.24.1 00:01:49.019 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:49.019 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:49.278 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:49.278 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:49.278 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:49.278 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:49.278 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:49.278 [247/268] Linking target 
drivers/librte_bus_pci.so.24.1 00:01:49.278 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:49.278 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:49.278 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:49.278 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:49.278 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:49.537 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:49.537 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:49.537 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:49.537 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:01:49.537 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:49.537 [258/268] Linking target lib/librte_net.so.24.1 00:01:49.795 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:49.795 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:49.795 [261/268] Linking target lib/librte_hash.so.24.1 00:01:49.795 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:49.795 [263/268] Linking target lib/librte_security.so.24.1 00:01:49.795 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:49.795 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:50.054 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:50.054 [267/268] Linking target lib/librte_power.so.24.1 00:01:50.054 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:50.054 INFO: autodetecting backend as ninja 00:01:50.054 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:00.173 CC lib/ut_mock/mock.o 00:02:00.173 CC lib/log/log.o 00:02:00.173 CC lib/ut/ut.o 00:02:00.173 CC lib/log/log_flags.o 00:02:00.173 CC 
lib/log/log_deprecated.o 00:02:00.173 LIB libspdk_ut_mock.a 00:02:00.173 LIB libspdk_log.a 00:02:00.173 LIB libspdk_ut.a 00:02:00.173 SO libspdk_ut.so.2.0 00:02:00.173 SO libspdk_ut_mock.so.6.0 00:02:00.173 SO libspdk_log.so.7.1 00:02:00.431 SYMLINK libspdk_ut.so 00:02:00.431 SYMLINK libspdk_ut_mock.so 00:02:00.431 SYMLINK libspdk_log.so 00:02:00.689 CC lib/ioat/ioat.o 00:02:00.689 CC lib/dma/dma.o 00:02:00.689 CC lib/util/base64.o 00:02:00.689 CXX lib/trace_parser/trace.o 00:02:00.689 CC lib/util/bit_array.o 00:02:00.689 CC lib/util/cpuset.o 00:02:00.689 CC lib/util/crc16.o 00:02:00.689 CC lib/util/crc32.o 00:02:00.689 CC lib/util/crc32c.o 00:02:00.689 CC lib/util/crc32_ieee.o 00:02:00.689 CC lib/util/crc64.o 00:02:00.689 CC lib/util/dif.o 00:02:00.689 CC lib/util/fd.o 00:02:00.689 CC lib/util/fd_group.o 00:02:00.689 CC lib/util/file.o 00:02:00.689 CC lib/util/hexlify.o 00:02:00.689 CC lib/util/iov.o 00:02:00.689 CC lib/util/math.o 00:02:00.689 CC lib/util/net.o 00:02:00.689 CC lib/util/pipe.o 00:02:00.689 CC lib/util/strerror_tls.o 00:02:00.689 CC lib/util/string.o 00:02:00.689 CC lib/util/uuid.o 00:02:00.689 CC lib/util/xor.o 00:02:00.689 CC lib/util/zipf.o 00:02:00.689 CC lib/util/md5.o 00:02:00.947 CC lib/vfio_user/host/vfio_user_pci.o 00:02:00.947 CC lib/vfio_user/host/vfio_user.o 00:02:00.947 LIB libspdk_dma.a 00:02:00.947 SO libspdk_dma.so.5.0 00:02:00.947 LIB libspdk_ioat.a 00:02:00.947 SO libspdk_ioat.so.7.0 00:02:00.947 SYMLINK libspdk_dma.so 00:02:00.947 SYMLINK libspdk_ioat.so 00:02:00.947 LIB libspdk_vfio_user.a 00:02:01.204 SO libspdk_vfio_user.so.5.0 00:02:01.204 LIB libspdk_util.a 00:02:01.204 SYMLINK libspdk_vfio_user.so 00:02:01.204 SO libspdk_util.so.10.1 00:02:01.204 SYMLINK libspdk_util.so 00:02:01.462 LIB libspdk_trace_parser.a 00:02:01.462 SO libspdk_trace_parser.so.6.0 00:02:01.462 SYMLINK libspdk_trace_parser.so 00:02:01.720 CC lib/idxd/idxd.o 00:02:01.720 CC lib/idxd/idxd_user.o 00:02:01.720 CC lib/conf/conf.o 00:02:01.720 CC 
lib/idxd/idxd_kernel.o 00:02:01.720 CC lib/rdma_utils/rdma_utils.o 00:02:01.720 CC lib/json/json_parse.o 00:02:01.720 CC lib/env_dpdk/env.o 00:02:01.720 CC lib/json/json_util.o 00:02:01.720 CC lib/vmd/vmd.o 00:02:01.720 CC lib/env_dpdk/memory.o 00:02:01.720 CC lib/json/json_write.o 00:02:01.720 CC lib/env_dpdk/pci.o 00:02:01.720 CC lib/vmd/led.o 00:02:01.720 CC lib/env_dpdk/init.o 00:02:01.720 CC lib/env_dpdk/threads.o 00:02:01.720 CC lib/env_dpdk/pci_ioat.o 00:02:01.720 CC lib/env_dpdk/pci_virtio.o 00:02:01.720 CC lib/env_dpdk/pci_vmd.o 00:02:01.720 CC lib/env_dpdk/pci_idxd.o 00:02:01.720 CC lib/env_dpdk/pci_event.o 00:02:01.720 CC lib/env_dpdk/sigbus_handler.o 00:02:01.720 CC lib/env_dpdk/pci_dpdk.o 00:02:01.720 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:01.720 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:01.978 LIB libspdk_conf.a 00:02:01.978 SO libspdk_conf.so.6.0 00:02:01.978 LIB libspdk_rdma_utils.a 00:02:01.978 LIB libspdk_json.a 00:02:01.978 SO libspdk_rdma_utils.so.1.0 00:02:01.978 SYMLINK libspdk_conf.so 00:02:01.978 SO libspdk_json.so.6.0 00:02:01.978 SYMLINK libspdk_rdma_utils.so 00:02:01.978 SYMLINK libspdk_json.so 00:02:01.978 LIB libspdk_idxd.a 00:02:02.236 SO libspdk_idxd.so.12.1 00:02:02.236 LIB libspdk_vmd.a 00:02:02.236 SO libspdk_vmd.so.6.0 00:02:02.236 SYMLINK libspdk_idxd.so 00:02:02.236 SYMLINK libspdk_vmd.so 00:02:02.236 CC lib/rdma_provider/common.o 00:02:02.236 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:02.236 CC lib/jsonrpc/jsonrpc_server.o 00:02:02.236 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:02.236 CC lib/jsonrpc/jsonrpc_client.o 00:02:02.236 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:02.495 LIB libspdk_rdma_provider.a 00:02:02.495 SO libspdk_rdma_provider.so.7.0 00:02:02.495 LIB libspdk_jsonrpc.a 00:02:02.495 SYMLINK libspdk_rdma_provider.so 00:02:02.495 SO libspdk_jsonrpc.so.6.0 00:02:02.755 SYMLINK libspdk_jsonrpc.so 00:02:02.755 LIB libspdk_env_dpdk.a 00:02:02.755 SO libspdk_env_dpdk.so.15.1 00:02:02.755 SYMLINK libspdk_env_dpdk.so 
00:02:03.014 CC lib/rpc/rpc.o 00:02:03.014 LIB libspdk_rpc.a 00:02:03.273 SO libspdk_rpc.so.6.0 00:02:03.273 SYMLINK libspdk_rpc.so 00:02:03.533 CC lib/trace/trace.o 00:02:03.533 CC lib/trace/trace_flags.o 00:02:03.533 CC lib/trace/trace_rpc.o 00:02:03.533 CC lib/notify/notify.o 00:02:03.533 CC lib/notify/notify_rpc.o 00:02:03.533 CC lib/keyring/keyring.o 00:02:03.533 CC lib/keyring/keyring_rpc.o 00:02:03.792 LIB libspdk_notify.a 00:02:03.792 SO libspdk_notify.so.6.0 00:02:03.792 LIB libspdk_keyring.a 00:02:03.792 LIB libspdk_trace.a 00:02:03.792 SO libspdk_keyring.so.2.0 00:02:03.792 SO libspdk_trace.so.11.0 00:02:03.792 SYMLINK libspdk_notify.so 00:02:03.792 SYMLINK libspdk_keyring.so 00:02:03.792 SYMLINK libspdk_trace.so 00:02:04.052 CC lib/sock/sock.o 00:02:04.052 CC lib/sock/sock_rpc.o 00:02:04.052 CC lib/thread/thread.o 00:02:04.052 CC lib/thread/iobuf.o 00:02:04.621 LIB libspdk_sock.a 00:02:04.621 SO libspdk_sock.so.10.0 00:02:04.621 SYMLINK libspdk_sock.so 00:02:04.880 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:04.880 CC lib/nvme/nvme_ctrlr.o 00:02:04.880 CC lib/nvme/nvme_fabric.o 00:02:04.880 CC lib/nvme/nvme_ns_cmd.o 00:02:04.880 CC lib/nvme/nvme_ns.o 00:02:04.880 CC lib/nvme/nvme_pcie_common.o 00:02:04.880 CC lib/nvme/nvme_pcie.o 00:02:04.880 CC lib/nvme/nvme_qpair.o 00:02:04.880 CC lib/nvme/nvme.o 00:02:04.880 CC lib/nvme/nvme_quirks.o 00:02:04.880 CC lib/nvme/nvme_transport.o 00:02:04.880 CC lib/nvme/nvme_discovery.o 00:02:04.880 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:04.880 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:04.880 CC lib/nvme/nvme_tcp.o 00:02:04.880 CC lib/nvme/nvme_opal.o 00:02:04.880 CC lib/nvme/nvme_io_msg.o 00:02:04.880 CC lib/nvme/nvme_poll_group.o 00:02:04.880 CC lib/nvme/nvme_zns.o 00:02:04.880 CC lib/nvme/nvme_stubs.o 00:02:04.880 CC lib/nvme/nvme_auth.o 00:02:04.880 CC lib/nvme/nvme_cuse.o 00:02:04.880 CC lib/nvme/nvme_vfio_user.o 00:02:04.880 CC lib/nvme/nvme_rdma.o 00:02:05.139 LIB libspdk_thread.a 00:02:05.397 SO libspdk_thread.so.11.0 
00:02:05.397 SYMLINK libspdk_thread.so 00:02:05.656 CC lib/virtio/virtio.o 00:02:05.656 CC lib/virtio/virtio_pci.o 00:02:05.656 CC lib/virtio/virtio_vhost_user.o 00:02:05.656 CC lib/virtio/virtio_vfio_user.o 00:02:05.656 CC lib/init/json_config.o 00:02:05.656 CC lib/blob/blobstore.o 00:02:05.656 CC lib/init/subsystem.o 00:02:05.656 CC lib/blob/request.o 00:02:05.656 CC lib/init/subsystem_rpc.o 00:02:05.656 CC lib/blob/zeroes.o 00:02:05.656 CC lib/init/rpc.o 00:02:05.656 CC lib/blob/blob_bs_dev.o 00:02:05.656 CC lib/accel/accel.o 00:02:05.656 CC lib/fsdev/fsdev.o 00:02:05.656 CC lib/fsdev/fsdev_io.o 00:02:05.656 CC lib/accel/accel_rpc.o 00:02:05.656 CC lib/vfu_tgt/tgt_endpoint.o 00:02:05.656 CC lib/accel/accel_sw.o 00:02:05.656 CC lib/fsdev/fsdev_rpc.o 00:02:05.656 CC lib/vfu_tgt/tgt_rpc.o 00:02:05.915 LIB libspdk_init.a 00:02:05.915 LIB libspdk_virtio.a 00:02:05.915 SO libspdk_init.so.6.0 00:02:05.915 LIB libspdk_vfu_tgt.a 00:02:05.915 SO libspdk_virtio.so.7.0 00:02:05.915 SO libspdk_vfu_tgt.so.3.0 00:02:05.915 SYMLINK libspdk_init.so 00:02:05.915 SYMLINK libspdk_virtio.so 00:02:05.915 SYMLINK libspdk_vfu_tgt.so 00:02:06.175 LIB libspdk_fsdev.a 00:02:06.175 SO libspdk_fsdev.so.2.0 00:02:06.175 SYMLINK libspdk_fsdev.so 00:02:06.175 CC lib/event/app.o 00:02:06.175 CC lib/event/reactor.o 00:02:06.175 CC lib/event/log_rpc.o 00:02:06.175 CC lib/event/app_rpc.o 00:02:06.175 CC lib/event/scheduler_static.o 00:02:06.434 LIB libspdk_accel.a 00:02:06.434 SO libspdk_accel.so.16.0 00:02:06.434 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:06.692 SYMLINK libspdk_accel.so 00:02:06.692 LIB libspdk_nvme.a 00:02:06.692 LIB libspdk_event.a 00:02:06.692 SO libspdk_event.so.14.0 00:02:06.692 SO libspdk_nvme.so.15.0 00:02:06.692 SYMLINK libspdk_event.so 00:02:06.949 CC lib/bdev/bdev.o 00:02:06.949 SYMLINK libspdk_nvme.so 00:02:06.949 CC lib/bdev/bdev_rpc.o 00:02:06.949 CC lib/bdev/bdev_zone.o 00:02:06.949 CC lib/bdev/part.o 00:02:06.949 CC lib/bdev/scsi_nvme.o 00:02:06.949 LIB 
libspdk_fuse_dispatcher.a 00:02:06.949 SO libspdk_fuse_dispatcher.so.1.0 00:02:07.208 SYMLINK libspdk_fuse_dispatcher.so 00:02:07.774 LIB libspdk_blob.a 00:02:07.774 SO libspdk_blob.so.11.0 00:02:08.033 SYMLINK libspdk_blob.so 00:02:08.292 CC lib/blobfs/blobfs.o 00:02:08.292 CC lib/blobfs/tree.o 00:02:08.292 CC lib/lvol/lvol.o 00:02:08.860 LIB libspdk_bdev.a 00:02:08.860 SO libspdk_bdev.so.17.0 00:02:08.860 LIB libspdk_blobfs.a 00:02:08.860 SO libspdk_blobfs.so.10.0 00:02:08.860 SYMLINK libspdk_bdev.so 00:02:08.860 LIB libspdk_lvol.a 00:02:08.860 SYMLINK libspdk_blobfs.so 00:02:08.860 SO libspdk_lvol.so.10.0 00:02:08.860 SYMLINK libspdk_lvol.so 00:02:09.118 CC lib/ftl/ftl_core.o 00:02:09.118 CC lib/ftl/ftl_init.o 00:02:09.118 CC lib/ftl/ftl_layout.o 00:02:09.118 CC lib/nvmf/ctrlr.o 00:02:09.118 CC lib/ftl/ftl_debug.o 00:02:09.118 CC lib/ftl/ftl_io.o 00:02:09.118 CC lib/nbd/nbd.o 00:02:09.118 CC lib/nvmf/ctrlr_discovery.o 00:02:09.118 CC lib/scsi/dev.o 00:02:09.118 CC lib/ublk/ublk.o 00:02:09.118 CC lib/nbd/nbd_rpc.o 00:02:09.118 CC lib/nvmf/ctrlr_bdev.o 00:02:09.118 CC lib/ftl/ftl_sb.o 00:02:09.118 CC lib/scsi/lun.o 00:02:09.118 CC lib/ftl/ftl_l2p.o 00:02:09.118 CC lib/ublk/ublk_rpc.o 00:02:09.118 CC lib/scsi/port.o 00:02:09.118 CC lib/nvmf/subsystem.o 00:02:09.118 CC lib/ftl/ftl_l2p_flat.o 00:02:09.118 CC lib/nvmf/nvmf.o 00:02:09.118 CC lib/scsi/scsi.o 00:02:09.118 CC lib/ftl/ftl_nv_cache.o 00:02:09.118 CC lib/scsi/scsi_bdev.o 00:02:09.118 CC lib/nvmf/nvmf_rpc.o 00:02:09.118 CC lib/ftl/ftl_band.o 00:02:09.118 CC lib/scsi/scsi_pr.o 00:02:09.118 CC lib/nvmf/transport.o 00:02:09.118 CC lib/ftl/ftl_band_ops.o 00:02:09.118 CC lib/nvmf/stubs.o 00:02:09.118 CC lib/scsi/scsi_rpc.o 00:02:09.118 CC lib/nvmf/tcp.o 00:02:09.118 CC lib/ftl/ftl_writer.o 00:02:09.118 CC lib/scsi/task.o 00:02:09.118 CC lib/ftl/ftl_rq.o 00:02:09.118 CC lib/nvmf/mdns_server.o 00:02:09.118 CC lib/ftl/ftl_reloc.o 00:02:09.118 CC lib/ftl/ftl_l2p_cache.o 00:02:09.118 CC lib/nvmf/vfio_user.o 
00:02:09.118 CC lib/nvmf/rdma.o 00:02:09.118 CC lib/ftl/ftl_p2l.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt.o 00:02:09.118 CC lib/ftl/ftl_p2l_log.o 00:02:09.118 CC lib/nvmf/auth.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:09.118 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:09.118 CC lib/ftl/utils/ftl_md.o 00:02:09.118 CC lib/ftl/utils/ftl_conf.o 00:02:09.118 CC lib/ftl/utils/ftl_bitmap.o 00:02:09.118 CC lib/ftl/utils/ftl_mempool.o 00:02:09.118 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:09.118 CC lib/ftl/utils/ftl_property.o 00:02:09.118 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:09.118 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:09.118 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:09.118 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:09.118 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:09.118 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:09.118 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:09.118 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:09.118 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:09.118 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:09.118 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:09.118 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:09.118 CC lib/ftl/ftl_trace.o 00:02:09.118 CC lib/ftl/base/ftl_base_bdev.o 00:02:09.118 CC lib/ftl/base/ftl_base_dev.o 00:02:09.685 LIB libspdk_nbd.a 00:02:09.685 SO libspdk_nbd.so.7.0 00:02:09.685 LIB libspdk_scsi.a 00:02:09.685 SYMLINK libspdk_nbd.so 00:02:09.944 LIB libspdk_ublk.a 00:02:09.944 SO libspdk_scsi.so.9.0 00:02:09.944 SO libspdk_ublk.so.3.0 00:02:09.944 SYMLINK libspdk_scsi.so 00:02:09.944 
SYMLINK libspdk_ublk.so 00:02:10.203 CC lib/vhost/vhost.o 00:02:10.203 CC lib/vhost/vhost_rpc.o 00:02:10.203 CC lib/vhost/vhost_scsi.o 00:02:10.203 CC lib/vhost/vhost_blk.o 00:02:10.203 CC lib/iscsi/conn.o 00:02:10.203 CC lib/iscsi/init_grp.o 00:02:10.203 CC lib/vhost/rte_vhost_user.o 00:02:10.203 CC lib/iscsi/iscsi.o 00:02:10.203 CC lib/iscsi/param.o 00:02:10.203 CC lib/iscsi/portal_grp.o 00:02:10.203 CC lib/iscsi/tgt_node.o 00:02:10.203 CC lib/iscsi/iscsi_subsystem.o 00:02:10.203 CC lib/iscsi/iscsi_rpc.o 00:02:10.203 CC lib/iscsi/task.o 00:02:10.203 LIB libspdk_ftl.a 00:02:10.461 SO libspdk_ftl.so.9.0 00:02:10.719 SYMLINK libspdk_ftl.so 00:02:10.978 LIB libspdk_nvmf.a 00:02:10.978 SO libspdk_nvmf.so.20.0 00:02:10.978 LIB libspdk_vhost.a 00:02:10.978 SO libspdk_vhost.so.8.0 00:02:11.238 SYMLINK libspdk_vhost.so 00:02:11.239 SYMLINK libspdk_nvmf.so 00:02:11.239 LIB libspdk_iscsi.a 00:02:11.239 SO libspdk_iscsi.so.8.0 00:02:11.498 SYMLINK libspdk_iscsi.so 00:02:12.067 CC module/env_dpdk/env_dpdk_rpc.o 00:02:12.067 CC module/vfu_device/vfu_virtio.o 00:02:12.067 CC module/vfu_device/vfu_virtio_scsi.o 00:02:12.067 CC module/vfu_device/vfu_virtio_blk.o 00:02:12.067 CC module/vfu_device/vfu_virtio_rpc.o 00:02:12.067 CC module/vfu_device/vfu_virtio_fs.o 00:02:12.067 CC module/accel/iaa/accel_iaa.o 00:02:12.067 CC module/scheduler/gscheduler/gscheduler.o 00:02:12.067 CC module/keyring/linux/keyring.o 00:02:12.067 CC module/accel/iaa/accel_iaa_rpc.o 00:02:12.067 CC module/accel/error/accel_error.o 00:02:12.067 CC module/keyring/linux/keyring_rpc.o 00:02:12.067 CC module/accel/error/accel_error_rpc.o 00:02:12.067 CC module/keyring/file/keyring.o 00:02:12.067 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:12.067 CC module/blob/bdev/blob_bdev.o 00:02:12.067 LIB libspdk_env_dpdk_rpc.a 00:02:12.067 CC module/keyring/file/keyring_rpc.o 00:02:12.067 CC module/sock/posix/posix.o 00:02:12.067 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:12.067 CC 
module/fsdev/aio/fsdev_aio.o 00:02:12.067 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:12.067 CC module/fsdev/aio/linux_aio_mgr.o 00:02:12.067 CC module/accel/dsa/accel_dsa.o 00:02:12.067 CC module/accel/ioat/accel_ioat.o 00:02:12.067 CC module/accel/dsa/accel_dsa_rpc.o 00:02:12.067 CC module/accel/ioat/accel_ioat_rpc.o 00:02:12.067 SO libspdk_env_dpdk_rpc.so.6.0 00:02:12.067 SYMLINK libspdk_env_dpdk_rpc.so 00:02:12.326 LIB libspdk_keyring_linux.a 00:02:12.326 LIB libspdk_keyring_file.a 00:02:12.326 LIB libspdk_scheduler_gscheduler.a 00:02:12.326 SO libspdk_keyring_linux.so.1.0 00:02:12.326 SO libspdk_scheduler_gscheduler.so.4.0 00:02:12.326 SO libspdk_keyring_file.so.2.0 00:02:12.326 LIB libspdk_scheduler_dpdk_governor.a 00:02:12.326 LIB libspdk_accel_iaa.a 00:02:12.326 LIB libspdk_scheduler_dynamic.a 00:02:12.326 LIB libspdk_accel_ioat.a 00:02:12.326 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:12.326 LIB libspdk_accel_error.a 00:02:12.326 SYMLINK libspdk_scheduler_gscheduler.so 00:02:12.326 SYMLINK libspdk_keyring_linux.so 00:02:12.326 SO libspdk_accel_iaa.so.3.0 00:02:12.326 SYMLINK libspdk_keyring_file.so 00:02:12.326 SO libspdk_scheduler_dynamic.so.4.0 00:02:12.326 SO libspdk_accel_ioat.so.6.0 00:02:12.326 SO libspdk_accel_error.so.2.0 00:02:12.326 LIB libspdk_blob_bdev.a 00:02:12.326 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:12.326 LIB libspdk_accel_dsa.a 00:02:12.326 SYMLINK libspdk_scheduler_dynamic.so 00:02:12.326 SO libspdk_blob_bdev.so.11.0 00:02:12.326 SYMLINK libspdk_accel_iaa.so 00:02:12.326 SYMLINK libspdk_accel_ioat.so 00:02:12.326 SO libspdk_accel_dsa.so.5.0 00:02:12.326 SYMLINK libspdk_accel_error.so 00:02:12.326 SYMLINK libspdk_blob_bdev.so 00:02:12.585 LIB libspdk_vfu_device.a 00:02:12.585 SYMLINK libspdk_accel_dsa.so 00:02:12.585 SO libspdk_vfu_device.so.3.0 00:02:12.585 SYMLINK libspdk_vfu_device.so 00:02:12.585 LIB libspdk_fsdev_aio.a 00:02:12.585 LIB libspdk_sock_posix.a 00:02:12.585 SO libspdk_fsdev_aio.so.1.0 00:02:12.843 SO 
libspdk_sock_posix.so.6.0 00:02:12.843 SYMLINK libspdk_fsdev_aio.so 00:02:12.843 SYMLINK libspdk_sock_posix.so 00:02:12.843 CC module/bdev/aio/bdev_aio.o 00:02:12.843 CC module/bdev/aio/bdev_aio_rpc.o 00:02:12.843 CC module/bdev/null/bdev_null.o 00:02:12.843 CC module/bdev/null/bdev_null_rpc.o 00:02:12.843 CC module/bdev/delay/vbdev_delay.o 00:02:12.843 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:12.843 CC module/blobfs/bdev/blobfs_bdev.o 00:02:12.843 CC module/bdev/nvme/bdev_nvme.o 00:02:12.844 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:12.844 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:12.844 CC module/bdev/raid/bdev_raid_rpc.o 00:02:12.844 CC module/bdev/split/vbdev_split.o 00:02:12.844 CC module/bdev/nvme/nvme_rpc.o 00:02:12.844 CC module/bdev/raid/bdev_raid.o 00:02:12.844 CC module/bdev/split/vbdev_split_rpc.o 00:02:12.844 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:12.844 CC module/bdev/nvme/bdev_mdns_client.o 00:02:12.844 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:12.844 CC module/bdev/lvol/vbdev_lvol.o 00:02:12.844 CC module/bdev/raid/bdev_raid_sb.o 00:02:12.844 CC module/bdev/nvme/vbdev_opal.o 00:02:12.844 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:12.844 CC module/bdev/raid/raid0.o 00:02:12.844 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:12.844 CC module/bdev/gpt/gpt.o 00:02:12.844 CC module/bdev/gpt/vbdev_gpt.o 00:02:12.844 CC module/bdev/error/vbdev_error.o 00:02:12.844 CC module/bdev/raid/raid1.o 00:02:12.844 CC module/bdev/raid/concat.o 00:02:12.844 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:12.844 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:12.844 CC module/bdev/error/vbdev_error_rpc.o 00:02:12.844 CC module/bdev/ftl/bdev_ftl.o 00:02:12.844 CC module/bdev/malloc/bdev_malloc.o 00:02:12.844 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:12.844 CC module/bdev/passthru/vbdev_passthru.o 00:02:12.844 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:12.844 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:12.844 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:02:12.844 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:12.844 CC module/bdev/iscsi/bdev_iscsi.o 00:02:12.844 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:13.102 LIB libspdk_blobfs_bdev.a 00:02:13.102 LIB libspdk_bdev_null.a 00:02:13.102 LIB libspdk_bdev_split.a 00:02:13.102 SO libspdk_blobfs_bdev.so.6.0 00:02:13.102 LIB libspdk_bdev_gpt.a 00:02:13.102 SO libspdk_bdev_split.so.6.0 00:02:13.102 SO libspdk_bdev_null.so.6.0 00:02:13.102 LIB libspdk_bdev_error.a 00:02:13.361 LIB libspdk_bdev_aio.a 00:02:13.361 SYMLINK libspdk_blobfs_bdev.so 00:02:13.361 SO libspdk_bdev_gpt.so.6.0 00:02:13.361 SO libspdk_bdev_error.so.6.0 00:02:13.361 SYMLINK libspdk_bdev_null.so 00:02:13.361 SYMLINK libspdk_bdev_split.so 00:02:13.361 LIB libspdk_bdev_passthru.a 00:02:13.361 SO libspdk_bdev_aio.so.6.0 00:02:13.361 LIB libspdk_bdev_ftl.a 00:02:13.361 SO libspdk_bdev_passthru.so.6.0 00:02:13.361 LIB libspdk_bdev_malloc.a 00:02:13.361 SO libspdk_bdev_ftl.so.6.0 00:02:13.361 SYMLINK libspdk_bdev_gpt.so 00:02:13.361 LIB libspdk_bdev_zone_block.a 00:02:13.361 SYMLINK libspdk_bdev_error.so 00:02:13.361 LIB libspdk_bdev_iscsi.a 00:02:13.361 SYMLINK libspdk_bdev_aio.so 00:02:13.361 LIB libspdk_bdev_delay.a 00:02:13.361 SO libspdk_bdev_malloc.so.6.0 00:02:13.361 SO libspdk_bdev_zone_block.so.6.0 00:02:13.361 SO libspdk_bdev_iscsi.so.6.0 00:02:13.361 SYMLINK libspdk_bdev_passthru.so 00:02:13.361 SO libspdk_bdev_delay.so.6.0 00:02:13.361 SYMLINK libspdk_bdev_ftl.so 00:02:13.361 SYMLINK libspdk_bdev_malloc.so 00:02:13.361 SYMLINK libspdk_bdev_zone_block.so 00:02:13.361 SYMLINK libspdk_bdev_iscsi.so 00:02:13.361 LIB libspdk_bdev_lvol.a 00:02:13.361 SYMLINK libspdk_bdev_delay.so 00:02:13.361 LIB libspdk_bdev_virtio.a 00:02:13.361 SO libspdk_bdev_lvol.so.6.0 00:02:13.361 SO libspdk_bdev_virtio.so.6.0 00:02:13.621 SYMLINK libspdk_bdev_lvol.so 00:02:13.621 SYMLINK libspdk_bdev_virtio.so 00:02:13.880 LIB libspdk_bdev_raid.a 00:02:13.880 SO 
libspdk_bdev_raid.so.6.0 00:02:13.880 SYMLINK libspdk_bdev_raid.so 00:02:14.817 LIB libspdk_bdev_nvme.a 00:02:14.817 SO libspdk_bdev_nvme.so.7.1 00:02:14.817 SYMLINK libspdk_bdev_nvme.so 00:02:15.755 CC module/event/subsystems/vmd/vmd.o 00:02:15.755 CC module/event/subsystems/iobuf/iobuf.o 00:02:15.755 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:15.755 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:15.755 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:15.755 CC module/event/subsystems/sock/sock.o 00:02:15.755 CC module/event/subsystems/keyring/keyring.o 00:02:15.755 CC module/event/subsystems/scheduler/scheduler.o 00:02:15.755 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:15.755 CC module/event/subsystems/fsdev/fsdev.o 00:02:15.755 LIB libspdk_event_vhost_blk.a 00:02:15.755 LIB libspdk_event_sock.a 00:02:15.755 LIB libspdk_event_scheduler.a 00:02:15.755 LIB libspdk_event_keyring.a 00:02:15.755 LIB libspdk_event_iobuf.a 00:02:15.755 LIB libspdk_event_vmd.a 00:02:15.755 LIB libspdk_event_fsdev.a 00:02:15.755 SO libspdk_event_vhost_blk.so.3.0 00:02:15.755 LIB libspdk_event_vfu_tgt.a 00:02:15.755 SO libspdk_event_sock.so.5.0 00:02:15.755 SO libspdk_event_iobuf.so.3.0 00:02:15.755 SO libspdk_event_vmd.so.6.0 00:02:15.755 SO libspdk_event_keyring.so.1.0 00:02:15.755 SO libspdk_event_fsdev.so.1.0 00:02:15.755 SO libspdk_event_scheduler.so.4.0 00:02:15.755 SO libspdk_event_vfu_tgt.so.3.0 00:02:15.755 SYMLINK libspdk_event_vhost_blk.so 00:02:15.755 SYMLINK libspdk_event_sock.so 00:02:15.755 SYMLINK libspdk_event_keyring.so 00:02:15.755 SYMLINK libspdk_event_fsdev.so 00:02:15.755 SYMLINK libspdk_event_iobuf.so 00:02:15.755 SYMLINK libspdk_event_scheduler.so 00:02:15.755 SYMLINK libspdk_event_vmd.so 00:02:15.755 SYMLINK libspdk_event_vfu_tgt.so 00:02:16.014 CC module/event/subsystems/accel/accel.o 00:02:16.274 LIB libspdk_event_accel.a 00:02:16.274 SO libspdk_event_accel.so.6.0 00:02:16.274 SYMLINK libspdk_event_accel.so 00:02:16.842 CC 
module/event/subsystems/bdev/bdev.o 00:02:16.842 LIB libspdk_event_bdev.a 00:02:16.842 SO libspdk_event_bdev.so.6.0 00:02:16.842 SYMLINK libspdk_event_bdev.so 00:02:17.412 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:17.412 CC module/event/subsystems/scsi/scsi.o 00:02:17.412 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:17.412 CC module/event/subsystems/nbd/nbd.o 00:02:17.412 CC module/event/subsystems/ublk/ublk.o 00:02:17.412 LIB libspdk_event_ublk.a 00:02:17.412 LIB libspdk_event_nbd.a 00:02:17.412 LIB libspdk_event_scsi.a 00:02:17.412 SO libspdk_event_ublk.so.3.0 00:02:17.412 SO libspdk_event_nbd.so.6.0 00:02:17.412 SO libspdk_event_scsi.so.6.0 00:02:17.412 LIB libspdk_event_nvmf.a 00:02:17.412 SYMLINK libspdk_event_ublk.so 00:02:17.412 SYMLINK libspdk_event_nbd.so 00:02:17.412 SO libspdk_event_nvmf.so.6.0 00:02:17.412 SYMLINK libspdk_event_scsi.so 00:02:17.671 SYMLINK libspdk_event_nvmf.so 00:02:17.930 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:17.930 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.930 LIB libspdk_event_vhost_scsi.a 00:02:17.930 LIB libspdk_event_iscsi.a 00:02:17.930 SO libspdk_event_vhost_scsi.so.3.0 00:02:17.930 SO libspdk_event_iscsi.so.6.0 00:02:17.930 SYMLINK libspdk_event_vhost_scsi.so 00:02:18.189 SYMLINK libspdk_event_iscsi.so 00:02:18.189 SO libspdk.so.6.0 00:02:18.189 SYMLINK libspdk.so 00:02:18.764 CC app/spdk_nvme_identify/identify.o 00:02:18.764 TEST_HEADER include/spdk/accel.h 00:02:18.764 TEST_HEADER include/spdk/accel_module.h 00:02:18.764 TEST_HEADER include/spdk/assert.h 00:02:18.764 TEST_HEADER include/spdk/base64.h 00:02:18.764 TEST_HEADER include/spdk/barrier.h 00:02:18.764 TEST_HEADER include/spdk/bdev.h 00:02:18.764 CC app/spdk_nvme_perf/perf.o 00:02:18.764 TEST_HEADER include/spdk/bdev_module.h 00:02:18.764 CC app/trace_record/trace_record.o 00:02:18.764 TEST_HEADER include/spdk/bdev_zone.h 00:02:18.764 CC app/spdk_lspci/spdk_lspci.o 00:02:18.764 TEST_HEADER include/spdk/bit_array.h 00:02:18.764 
TEST_HEADER include/spdk/bit_pool.h 00:02:18.764 TEST_HEADER include/spdk/blob_bdev.h 00:02:18.764 CXX app/trace/trace.o 00:02:18.764 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:18.764 TEST_HEADER include/spdk/blobfs.h 00:02:18.764 TEST_HEADER include/spdk/blob.h 00:02:18.764 TEST_HEADER include/spdk/conf.h 00:02:18.764 TEST_HEADER include/spdk/crc16.h 00:02:18.764 TEST_HEADER include/spdk/config.h 00:02:18.764 CC app/spdk_top/spdk_top.o 00:02:18.764 TEST_HEADER include/spdk/cpuset.h 00:02:18.764 CC test/rpc_client/rpc_client_test.o 00:02:18.764 TEST_HEADER include/spdk/crc64.h 00:02:18.764 TEST_HEADER include/spdk/crc32.h 00:02:18.764 TEST_HEADER include/spdk/dif.h 00:02:18.764 TEST_HEADER include/spdk/env_dpdk.h 00:02:18.764 TEST_HEADER include/spdk/endian.h 00:02:18.764 TEST_HEADER include/spdk/dma.h 00:02:18.764 CC app/spdk_nvme_discover/discovery_aer.o 00:02:18.764 TEST_HEADER include/spdk/env.h 00:02:18.764 TEST_HEADER include/spdk/event.h 00:02:18.764 TEST_HEADER include/spdk/fd_group.h 00:02:18.764 TEST_HEADER include/spdk/fd.h 00:02:18.764 TEST_HEADER include/spdk/file.h 00:02:18.764 TEST_HEADER include/spdk/fsdev.h 00:02:18.764 TEST_HEADER include/spdk/fsdev_module.h 00:02:18.764 TEST_HEADER include/spdk/ftl.h 00:02:18.764 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:18.764 TEST_HEADER include/spdk/hexlify.h 00:02:18.764 TEST_HEADER include/spdk/gpt_spec.h 00:02:18.764 TEST_HEADER include/spdk/histogram_data.h 00:02:18.764 TEST_HEADER include/spdk/idxd.h 00:02:18.764 TEST_HEADER include/spdk/idxd_spec.h 00:02:18.764 TEST_HEADER include/spdk/init.h 00:02:18.764 TEST_HEADER include/spdk/ioat.h 00:02:18.764 TEST_HEADER include/spdk/ioat_spec.h 00:02:18.765 TEST_HEADER include/spdk/iscsi_spec.h 00:02:18.765 TEST_HEADER include/spdk/json.h 00:02:18.765 TEST_HEADER include/spdk/jsonrpc.h 00:02:18.765 TEST_HEADER include/spdk/keyring.h 00:02:18.765 TEST_HEADER include/spdk/keyring_module.h 00:02:18.765 TEST_HEADER include/spdk/log.h 00:02:18.765 TEST_HEADER 
include/spdk/likely.h 00:02:18.765 TEST_HEADER include/spdk/md5.h 00:02:18.765 TEST_HEADER include/spdk/lvol.h 00:02:18.765 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:18.765 TEST_HEADER include/spdk/mmio.h 00:02:18.765 TEST_HEADER include/spdk/memory.h 00:02:18.765 TEST_HEADER include/spdk/nbd.h 00:02:18.765 TEST_HEADER include/spdk/nvme.h 00:02:18.765 TEST_HEADER include/spdk/net.h 00:02:18.765 TEST_HEADER include/spdk/nvme_intel.h 00:02:18.765 TEST_HEADER include/spdk/notify.h 00:02:18.765 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:18.765 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:18.765 TEST_HEADER include/spdk/nvme_spec.h 00:02:18.765 CC app/spdk_dd/spdk_dd.o 00:02:18.765 TEST_HEADER include/spdk/nvme_zns.h 00:02:18.765 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:18.765 TEST_HEADER include/spdk/nvmf.h 00:02:18.765 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:18.765 TEST_HEADER include/spdk/nvmf_transport.h 00:02:18.765 TEST_HEADER include/spdk/opal.h 00:02:18.765 TEST_HEADER include/spdk/nvmf_spec.h 00:02:18.765 TEST_HEADER include/spdk/pci_ids.h 00:02:18.765 TEST_HEADER include/spdk/opal_spec.h 00:02:18.765 TEST_HEADER include/spdk/queue.h 00:02:18.765 TEST_HEADER include/spdk/pipe.h 00:02:18.765 TEST_HEADER include/spdk/scheduler.h 00:02:18.765 TEST_HEADER include/spdk/reduce.h 00:02:18.765 TEST_HEADER include/spdk/rpc.h 00:02:18.765 TEST_HEADER include/spdk/scsi.h 00:02:18.765 CC app/iscsi_tgt/iscsi_tgt.o 00:02:18.765 TEST_HEADER include/spdk/sock.h 00:02:18.765 TEST_HEADER include/spdk/stdinc.h 00:02:18.765 TEST_HEADER include/spdk/scsi_spec.h 00:02:18.765 TEST_HEADER include/spdk/string.h 00:02:18.765 TEST_HEADER include/spdk/thread.h 00:02:18.765 TEST_HEADER include/spdk/trace.h 00:02:18.765 CC app/spdk_tgt/spdk_tgt.o 00:02:18.765 TEST_HEADER include/spdk/tree.h 00:02:18.765 TEST_HEADER include/spdk/trace_parser.h 00:02:18.765 TEST_HEADER include/spdk/util.h 00:02:18.765 TEST_HEADER include/spdk/ublk.h 00:02:18.765 TEST_HEADER 
include/spdk/uuid.h 00:02:18.765 TEST_HEADER include/spdk/version.h 00:02:18.765 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:18.765 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:18.765 TEST_HEADER include/spdk/vhost.h 00:02:18.765 TEST_HEADER include/spdk/vmd.h 00:02:18.765 TEST_HEADER include/spdk/xor.h 00:02:18.765 TEST_HEADER include/spdk/zipf.h 00:02:18.765 CXX test/cpp_headers/accel.o 00:02:18.765 CXX test/cpp_headers/assert.o 00:02:18.765 CXX test/cpp_headers/accel_module.o 00:02:18.765 CXX test/cpp_headers/base64.o 00:02:18.765 CXX test/cpp_headers/barrier.o 00:02:18.765 CXX test/cpp_headers/bdev.o 00:02:18.765 CXX test/cpp_headers/bdev_module.o 00:02:18.765 CXX test/cpp_headers/bit_array.o 00:02:18.765 CXX test/cpp_headers/bdev_zone.o 00:02:18.765 CXX test/cpp_headers/bit_pool.o 00:02:18.765 CXX test/cpp_headers/blob_bdev.o 00:02:18.765 CXX test/cpp_headers/blobfs_bdev.o 00:02:18.765 CXX test/cpp_headers/blobfs.o 00:02:18.765 CXX test/cpp_headers/conf.o 00:02:18.765 CXX test/cpp_headers/blob.o 00:02:18.765 CXX test/cpp_headers/config.o 00:02:18.765 CXX test/cpp_headers/crc32.o 00:02:18.765 CXX test/cpp_headers/crc16.o 00:02:18.765 CXX test/cpp_headers/crc64.o 00:02:18.765 CXX test/cpp_headers/cpuset.o 00:02:18.765 CXX test/cpp_headers/dif.o 00:02:18.765 CXX test/cpp_headers/env_dpdk.o 00:02:18.765 CXX test/cpp_headers/endian.o 00:02:18.765 CXX test/cpp_headers/dma.o 00:02:18.765 CXX test/cpp_headers/fd_group.o 00:02:18.765 CXX test/cpp_headers/event.o 00:02:18.765 CXX test/cpp_headers/env.o 00:02:18.765 CXX test/cpp_headers/fd.o 00:02:18.765 CXX test/cpp_headers/fsdev.o 00:02:18.765 CXX test/cpp_headers/file.o 00:02:18.765 CXX test/cpp_headers/ftl.o 00:02:18.765 CXX test/cpp_headers/fsdev_module.o 00:02:18.765 CXX test/cpp_headers/fuse_dispatcher.o 00:02:18.765 CXX test/cpp_headers/gpt_spec.o 00:02:18.765 CXX test/cpp_headers/hexlify.o 00:02:18.765 CXX test/cpp_headers/histogram_data.o 00:02:18.765 CXX test/cpp_headers/idxd_spec.o 00:02:18.765 CC 
app/nvmf_tgt/nvmf_main.o 00:02:18.765 CXX test/cpp_headers/idxd.o 00:02:18.765 CXX test/cpp_headers/ioat.o 00:02:18.765 CXX test/cpp_headers/init.o 00:02:18.765 CXX test/cpp_headers/json.o 00:02:18.765 CXX test/cpp_headers/iscsi_spec.o 00:02:18.765 CXX test/cpp_headers/ioat_spec.o 00:02:18.765 CXX test/cpp_headers/jsonrpc.o 00:02:18.765 CXX test/cpp_headers/keyring.o 00:02:18.765 CXX test/cpp_headers/keyring_module.o 00:02:18.765 CXX test/cpp_headers/likely.o 00:02:18.765 CXX test/cpp_headers/log.o 00:02:18.765 CXX test/cpp_headers/md5.o 00:02:18.765 CXX test/cpp_headers/lvol.o 00:02:18.765 CXX test/cpp_headers/nbd.o 00:02:18.765 CXX test/cpp_headers/memory.o 00:02:18.765 CXX test/cpp_headers/mmio.o 00:02:18.765 CXX test/cpp_headers/notify.o 00:02:18.765 CXX test/cpp_headers/net.o 00:02:18.765 CXX test/cpp_headers/nvme.o 00:02:18.765 CXX test/cpp_headers/nvme_intel.o 00:02:18.765 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:18.765 CXX test/cpp_headers/nvme_ocssd.o 00:02:18.765 CXX test/cpp_headers/nvme_spec.o 00:02:18.765 CXX test/cpp_headers/nvme_zns.o 00:02:18.765 CXX test/cpp_headers/nvmf_cmd.o 00:02:18.765 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:18.765 CXX test/cpp_headers/nvmf.o 00:02:18.765 CXX test/cpp_headers/nvmf_spec.o 00:02:18.765 CXX test/cpp_headers/nvmf_transport.o 00:02:18.765 CC test/env/pci/pci_ut.o 00:02:18.765 CXX test/cpp_headers/opal.o 00:02:18.765 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:18.765 CC test/env/memory/memory_ut.o 00:02:18.765 CC examples/util/zipf/zipf.o 00:02:18.765 CXX test/cpp_headers/opal_spec.o 00:02:18.765 CC test/app/stub/stub.o 00:02:18.765 CC test/env/vtophys/vtophys.o 00:02:18.765 CC app/fio/nvme/fio_plugin.o 00:02:18.765 CC test/dma/test_dma/test_dma.o 00:02:18.765 CC test/thread/poller_perf/poller_perf.o 00:02:18.765 CC test/app/jsoncat/jsoncat.o 00:02:18.765 CC test/app/histogram_perf/histogram_perf.o 00:02:18.765 CC examples/ioat/verify/verify.o 00:02:18.765 CC examples/ioat/perf/perf.o 00:02:18.765 
CC app/fio/bdev/fio_plugin.o 00:02:18.765 CC test/app/bdev_svc/bdev_svc.o 00:02:19.034 LINK spdk_lspci 00:02:19.034 LINK rpc_client_test 00:02:19.034 LINK spdk_nvme_discover 00:02:19.034 LINK interrupt_tgt 00:02:19.293 CC test/env/mem_callbacks/mem_callbacks.o 00:02:19.293 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:19.293 LINK nvmf_tgt 00:02:19.293 LINK jsoncat 00:02:19.293 CXX test/cpp_headers/pci_ids.o 00:02:19.293 LINK histogram_perf 00:02:19.293 CXX test/cpp_headers/pipe.o 00:02:19.293 CXX test/cpp_headers/queue.o 00:02:19.293 CXX test/cpp_headers/reduce.o 00:02:19.293 CXX test/cpp_headers/rpc.o 00:02:19.293 CXX test/cpp_headers/scheduler.o 00:02:19.293 CXX test/cpp_headers/scsi.o 00:02:19.293 CXX test/cpp_headers/scsi_spec.o 00:02:19.293 CXX test/cpp_headers/sock.o 00:02:19.293 CXX test/cpp_headers/stdinc.o 00:02:19.293 CXX test/cpp_headers/string.o 00:02:19.293 LINK stub 00:02:19.293 CXX test/cpp_headers/thread.o 00:02:19.293 CXX test/cpp_headers/trace.o 00:02:19.293 CXX test/cpp_headers/trace_parser.o 00:02:19.293 CXX test/cpp_headers/ublk.o 00:02:19.293 CXX test/cpp_headers/tree.o 00:02:19.293 CXX test/cpp_headers/util.o 00:02:19.293 CXX test/cpp_headers/uuid.o 00:02:19.293 CXX test/cpp_headers/version.o 00:02:19.293 CXX test/cpp_headers/vfio_user_pci.o 00:02:19.293 LINK iscsi_tgt 00:02:19.293 CXX test/cpp_headers/vfio_user_spec.o 00:02:19.293 CXX test/cpp_headers/vhost.o 00:02:19.293 LINK spdk_trace_record 00:02:19.293 CXX test/cpp_headers/vmd.o 00:02:19.293 CXX test/cpp_headers/xor.o 00:02:19.293 CXX test/cpp_headers/zipf.o 00:02:19.293 LINK zipf 00:02:19.293 LINK spdk_tgt 00:02:19.293 LINK poller_perf 00:02:19.293 LINK vtophys 00:02:19.551 LINK env_dpdk_post_init 00:02:19.551 LINK ioat_perf 00:02:19.551 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:19.551 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:19.551 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:19.551 LINK bdev_svc 00:02:19.551 LINK verify 00:02:19.552 LINK spdk_trace 00:02:19.552 LINK 
spdk_dd 00:02:19.552 LINK pci_ut 00:02:19.809 LINK spdk_nvme 00:02:19.809 LINK test_dma 00:02:19.809 LINK nvme_fuzz 00:02:19.809 CC app/vhost/vhost.o 00:02:19.809 CC examples/sock/hello_world/hello_sock.o 00:02:19.809 LINK spdk_top 00:02:19.809 LINK spdk_bdev 00:02:19.809 CC examples/vmd/led/led.o 00:02:19.809 LINK vhost_fuzz 00:02:19.809 CC examples/vmd/lsvmd/lsvmd.o 00:02:19.809 LINK spdk_nvme_identify 00:02:19.809 CC examples/idxd/perf/perf.o 00:02:19.809 CC test/event/reactor_perf/reactor_perf.o 00:02:19.809 CC test/event/reactor/reactor.o 00:02:19.809 LINK mem_callbacks 00:02:19.809 LINK spdk_nvme_perf 00:02:19.809 CC examples/thread/thread/thread_ex.o 00:02:19.809 CC test/event/event_perf/event_perf.o 00:02:20.079 CC test/event/app_repeat/app_repeat.o 00:02:20.079 CC test/event/scheduler/scheduler.o 00:02:20.079 LINK lsvmd 00:02:20.079 LINK led 00:02:20.080 LINK reactor 00:02:20.080 LINK vhost 00:02:20.080 LINK reactor_perf 00:02:20.080 LINK event_perf 00:02:20.080 LINK hello_sock 00:02:20.080 LINK app_repeat 00:02:20.080 LINK thread 00:02:20.080 LINK idxd_perf 00:02:20.340 LINK scheduler 00:02:20.340 LINK memory_ut 00:02:20.340 CC test/nvme/compliance/nvme_compliance.o 00:02:20.340 CC test/nvme/reset/reset.o 00:02:20.340 CC test/nvme/connect_stress/connect_stress.o 00:02:20.340 CC test/nvme/err_injection/err_injection.o 00:02:20.340 CC test/nvme/sgl/sgl.o 00:02:20.340 CC test/nvme/e2edp/nvme_dp.o 00:02:20.340 CC test/nvme/fdp/fdp.o 00:02:20.340 CC test/nvme/cuse/cuse.o 00:02:20.340 CC test/nvme/boot_partition/boot_partition.o 00:02:20.340 CC test/nvme/overhead/overhead.o 00:02:20.340 CC test/nvme/reserve/reserve.o 00:02:20.340 CC test/nvme/fused_ordering/fused_ordering.o 00:02:20.340 CC test/nvme/aer/aer.o 00:02:20.340 CC test/nvme/simple_copy/simple_copy.o 00:02:20.340 CC test/nvme/startup/startup.o 00:02:20.340 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:20.340 CC test/blobfs/mkfs/mkfs.o 00:02:20.340 CC test/accel/dif/dif.o 00:02:20.340 CC 
test/lvol/esnap/esnap.o 00:02:20.598 LINK connect_stress 00:02:20.598 LINK boot_partition 00:02:20.598 LINK startup 00:02:20.598 LINK err_injection 00:02:20.598 LINK reserve 00:02:20.598 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:20.598 CC examples/nvme/abort/abort.o 00:02:20.598 LINK doorbell_aers 00:02:20.598 LINK fused_ordering 00:02:20.598 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:20.598 CC examples/nvme/arbitration/arbitration.o 00:02:20.598 CC examples/nvme/hello_world/hello_world.o 00:02:20.598 CC examples/nvme/reconnect/reconnect.o 00:02:20.598 LINK nvme_dp 00:02:20.598 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:20.598 LINK simple_copy 00:02:20.598 CC examples/nvme/hotplug/hotplug.o 00:02:20.598 LINK sgl 00:02:20.598 LINK reset 00:02:20.598 LINK mkfs 00:02:20.598 LINK aer 00:02:20.598 LINK overhead 00:02:20.598 LINK nvme_compliance 00:02:20.598 LINK fdp 00:02:20.599 CC examples/accel/perf/accel_perf.o 00:02:20.599 CC examples/blob/hello_world/hello_blob.o 00:02:20.599 CC examples/blob/cli/blobcli.o 00:02:20.599 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:20.857 LINK pmr_persistence 00:02:20.857 LINK cmb_copy 00:02:20.857 LINK hello_world 00:02:20.857 LINK hotplug 00:02:20.857 LINK iscsi_fuzz 00:02:20.857 LINK arbitration 00:02:20.857 LINK reconnect 00:02:20.857 LINK abort 00:02:20.858 LINK hello_blob 00:02:20.858 LINK nvme_manage 00:02:20.858 LINK dif 00:02:20.858 LINK hello_fsdev 00:02:21.116 LINK accel_perf 00:02:21.116 LINK blobcli 00:02:21.374 LINK cuse 00:02:21.374 CC test/bdev/bdevio/bdevio.o 00:02:21.632 CC examples/bdev/hello_world/hello_bdev.o 00:02:21.632 CC examples/bdev/bdevperf/bdevperf.o 00:02:21.890 LINK hello_bdev 00:02:21.890 LINK bdevio 00:02:22.150 LINK bdevperf 00:02:22.719 CC examples/nvmf/nvmf/nvmf.o 00:02:22.978 LINK nvmf 00:02:23.916 LINK esnap 00:02:24.176 00:02:24.176 real 0m55.480s 00:02:24.176 user 8m0.059s 00:02:24.176 sys 3m41.703s 00:02:24.176 10:56:51 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:02:24.177 10:56:51 make -- common/autotest_common.sh@10 -- $ set +x 00:02:24.177 ************************************ 00:02:24.177 END TEST make 00:02:24.177 ************************************ 00:02:24.437 10:56:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:24.437 10:56:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:24.437 10:56:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:24.437 10:56:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.437 10:56:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:24.437 10:56:51 -- pm/common@44 -- $ pid=3780482 00:02:24.437 10:56:51 -- pm/common@50 -- $ kill -TERM 3780482 00:02:24.437 10:56:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.437 10:56:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:24.437 10:56:51 -- pm/common@44 -- $ pid=3780483 00:02:24.437 10:56:51 -- pm/common@50 -- $ kill -TERM 3780483 00:02:24.437 10:56:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.437 10:56:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:24.437 10:56:51 -- pm/common@44 -- $ pid=3780485 00:02:24.437 10:56:51 -- pm/common@50 -- $ kill -TERM 3780485 00:02:24.437 10:56:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.437 10:56:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:24.437 10:56:51 -- pm/common@44 -- $ pid=3780508 00:02:24.437 10:56:51 -- pm/common@50 -- $ sudo -E kill -TERM 3780508 00:02:24.437 10:56:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:24.437 10:56:51 -- spdk/autorun.sh@27 -- $ sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:24.437 10:56:51 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:24.437 10:56:51 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:24.437 10:56:51 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:24.437 10:56:51 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:24.437 10:56:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:24.437 10:56:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:24.437 10:56:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:24.437 10:56:51 -- scripts/common.sh@336 -- # IFS=.-: 00:02:24.437 10:56:51 -- scripts/common.sh@336 -- # read -ra ver1 00:02:24.437 10:56:51 -- scripts/common.sh@337 -- # IFS=.-: 00:02:24.437 10:56:51 -- scripts/common.sh@337 -- # read -ra ver2 00:02:24.437 10:56:51 -- scripts/common.sh@338 -- # local 'op=<' 00:02:24.437 10:56:51 -- scripts/common.sh@340 -- # ver1_l=2 00:02:24.437 10:56:51 -- scripts/common.sh@341 -- # ver2_l=1 00:02:24.437 10:56:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:24.437 10:56:51 -- scripts/common.sh@344 -- # case "$op" in 00:02:24.437 10:56:51 -- scripts/common.sh@345 -- # : 1 00:02:24.437 10:56:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:24.437 10:56:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:24.437 10:56:51 -- scripts/common.sh@365 -- # decimal 1 00:02:24.437 10:56:51 -- scripts/common.sh@353 -- # local d=1 00:02:24.437 10:56:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:24.437 10:56:51 -- scripts/common.sh@355 -- # echo 1 00:02:24.437 10:56:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:24.437 10:56:51 -- scripts/common.sh@366 -- # decimal 2 00:02:24.437 10:56:51 -- scripts/common.sh@353 -- # local d=2 00:02:24.437 10:56:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:24.437 10:56:51 -- scripts/common.sh@355 -- # echo 2 00:02:24.437 10:56:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:24.437 10:56:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:24.437 10:56:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:24.437 10:56:51 -- scripts/common.sh@368 -- # return 0 00:02:24.437 10:56:51 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:24.437 10:56:51 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:24.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:24.437 --rc genhtml_branch_coverage=1 00:02:24.437 --rc genhtml_function_coverage=1 00:02:24.437 --rc genhtml_legend=1 00:02:24.437 --rc geninfo_all_blocks=1 00:02:24.437 --rc geninfo_unexecuted_blocks=1 00:02:24.437 00:02:24.437 ' 00:02:24.437 10:56:51 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:24.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:24.437 --rc genhtml_branch_coverage=1 00:02:24.437 --rc genhtml_function_coverage=1 00:02:24.437 --rc genhtml_legend=1 00:02:24.437 --rc geninfo_all_blocks=1 00:02:24.437 --rc geninfo_unexecuted_blocks=1 00:02:24.437 00:02:24.437 ' 00:02:24.437 10:56:51 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:24.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:24.437 --rc genhtml_branch_coverage=1 00:02:24.437 --rc 
genhtml_function_coverage=1 00:02:24.437 --rc genhtml_legend=1 00:02:24.437 --rc geninfo_all_blocks=1 00:02:24.437 --rc geninfo_unexecuted_blocks=1 00:02:24.437 00:02:24.437 ' 00:02:24.437 10:56:51 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:24.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:24.437 --rc genhtml_branch_coverage=1 00:02:24.437 --rc genhtml_function_coverage=1 00:02:24.437 --rc genhtml_legend=1 00:02:24.437 --rc geninfo_all_blocks=1 00:02:24.437 --rc geninfo_unexecuted_blocks=1 00:02:24.437 00:02:24.437 ' 00:02:24.437 10:56:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:24.437 10:56:51 -- nvmf/common.sh@7 -- # uname -s 00:02:24.437 10:56:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:24.437 10:56:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:24.437 10:56:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:24.437 10:56:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:24.437 10:56:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:24.437 10:56:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:24.437 10:56:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:24.437 10:56:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:24.437 10:56:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:24.437 10:56:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:24.437 10:56:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:24.437 10:56:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:24.437 10:56:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:24.438 10:56:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:24.438 10:56:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:24.438 10:56:51 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:24.438 10:56:51 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:24.438 10:56:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:24.438 10:56:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:24.438 10:56:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:24.438 10:56:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:24.438 10:56:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.438 10:56:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.438 10:56:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.438 10:56:51 -- paths/export.sh@5 -- # export PATH 00:02:24.438 10:56:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.438 10:56:51 -- nvmf/common.sh@51 -- # : 0 00:02:24.438 10:56:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:24.438 10:56:51 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:24.438 10:56:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:24.438 10:56:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:24.438 10:56:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:24.438 10:56:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:24.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:24.438 10:56:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:24.438 10:56:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:24.438 10:56:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:24.438 10:56:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:24.438 10:56:51 -- spdk/autotest.sh@32 -- # uname -s 00:02:24.698 10:56:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:24.698 10:56:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:24.698 10:56:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:24.698 10:56:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:24.698 10:56:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:24.698 10:56:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:24.698 10:56:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:24.698 10:56:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:24.698 10:56:51 -- spdk/autotest.sh@48 -- # udevadm_pid=3842978 00:02:24.698 10:56:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:24.698 10:56:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:24.698 10:56:51 -- pm/common@17 -- # local monitor 00:02:24.698 10:56:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.698 10:56:51 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:24.698 10:56:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.698 10:56:51 -- pm/common@21 -- # date +%s 00:02:24.698 10:56:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.698 10:56:51 -- pm/common@21 -- # date +%s 00:02:24.698 10:56:51 -- pm/common@25 -- # sleep 1 00:02:24.698 10:56:51 -- pm/common@21 -- # date +%s 00:02:24.698 10:56:51 -- pm/common@21 -- # date +%s 00:02:24.698 10:56:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732096611 00:02:24.698 10:56:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732096611 00:02:24.698 10:56:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732096611 00:02:24.698 10:56:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732096611 00:02:24.698 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732096611_collect-cpu-load.pm.log 00:02:24.698 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732096611_collect-vmstat.pm.log 00:02:24.698 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732096611_collect-cpu-temp.pm.log 00:02:24.698 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732096611_collect-bmc-pm.bmc.pm.log 00:02:25.638 
10:56:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:25.638 10:56:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:25.638 10:56:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:25.638 10:56:52 -- common/autotest_common.sh@10 -- # set +x 00:02:25.638 10:56:52 -- spdk/autotest.sh@59 -- # create_test_list 00:02:25.638 10:56:52 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:25.638 10:56:52 -- common/autotest_common.sh@10 -- # set +x 00:02:25.638 10:56:53 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:25.638 10:56:53 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.638 10:56:53 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.638 10:56:53 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:25.638 10:56:53 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.638 10:56:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:25.638 10:56:53 -- common/autotest_common.sh@1457 -- # uname 00:02:25.638 10:56:53 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:25.638 10:56:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:25.638 10:56:53 -- common/autotest_common.sh@1477 -- # uname 00:02:25.638 10:56:53 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:25.638 10:56:53 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:25.638 10:56:53 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:25.638 lcov: LCOV version 1.15 00:02:25.638 10:56:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:37.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:37.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:52.733 10:57:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:52.733 10:57:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:52.733 10:57:18 -- common/autotest_common.sh@10 -- # set +x 00:02:52.733 10:57:18 -- spdk/autotest.sh@78 -- # rm -f 00:02:52.733 10:57:18 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.672 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:53.932 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:53.932 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:54.191 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:54.191 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:54.191 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:54.191 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:54.191 10:57:21 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:54.191 10:57:21 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:54.191 10:57:21 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:54.191 10:57:21 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:54.191 10:57:21 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:54.192 10:57:21 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:54.192 10:57:21 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:54.192 10:57:21 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.192 10:57:21 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:54.192 10:57:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:54.192 10:57:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:54.192 10:57:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:54.192 10:57:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:54.192 10:57:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:54.192 10:57:21 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:54.192 No valid GPT data, bailing 00:02:54.192 10:57:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:54.192 10:57:21 -- scripts/common.sh@394 -- # pt= 00:02:54.192 10:57:21 -- scripts/common.sh@395 -- # return 1 00:02:54.192 10:57:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:54.192 1+0 records in 00:02:54.192 1+0 records out 00:02:54.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0017984 s, 583 MB/s 00:02:54.192 10:57:21 -- spdk/autotest.sh@105 -- # sync 00:02:54.192 10:57:21 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:54.192 10:57:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:54.192 10:57:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:00.944 10:57:27 -- spdk/autotest.sh@111 -- # uname -s 00:03:00.944 10:57:27 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:00.944 10:57:27 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:00.944 10:57:27 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:02.856 Hugepages 00:03:02.856 node hugesize free / total 00:03:02.856 node0 1048576kB 0 / 0 00:03:02.856 node0 2048kB 0 / 0 00:03:02.856 node1 1048576kB 0 / 0 00:03:02.856 node1 2048kB 0 / 0 00:03:02.856 00:03:02.856 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:02.856 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:02.856 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:02.856 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:02.856 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:02.856 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:02.856 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:02.856 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:02.856 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:02.856 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:02.856 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:02.856 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:02.856 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:02.856 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:02.856 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:02.856 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:02.856 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:02.856 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:02.856 10:57:30 -- spdk/autotest.sh@117 -- # uname -s 00:03:02.856 10:57:30 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:02.856 10:57:30 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:02.856 10:57:30 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:06.151 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:06.151 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:06.724 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:06.724 10:57:34 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:07.661 10:57:35 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:07.661 10:57:35 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:07.661 10:57:35 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:07.661 10:57:35 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:07.661 10:57:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:07.661 10:57:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:07.661 10:57:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:07.661 10:57:35 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:07.661 10:57:35 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:07.661 10:57:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:07.921 10:57:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:07.921 10:57:35 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.458 Waiting for block devices as requested 00:03:10.458 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:10.718 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:10.718 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:10.978 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:10.978 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:10.978 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:10.978 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:11.237 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:11.237 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:11.237 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:11.495 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:11.495 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:11.495 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:11.754 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:11.754 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:11.754 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:11.754 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:12.013 10:57:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:12.013 10:57:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:12.013 10:57:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:12.013 10:57:39 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:12.013 10:57:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:12.013 10:57:39 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:12.013 10:57:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:12.013 10:57:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:12.013 10:57:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:12.013 10:57:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:12.013 10:57:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:12.013 10:57:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:12.013 10:57:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:12.013 10:57:39 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:12.013 10:57:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:12.013 10:57:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:12.013 10:57:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:12.013 10:57:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:12.013 10:57:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:12.013 10:57:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:12.013 10:57:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:12.013 10:57:39 -- common/autotest_common.sh@1543 -- # continue 00:03:12.013 10:57:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:12.013 10:57:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:12.013 10:57:39 -- common/autotest_common.sh@10 -- # set +x 00:03:12.013 10:57:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:12.013 10:57:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.013 10:57:39 -- common/autotest_common.sh@10 -- # set +x 00:03:12.013 10:57:39 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:15.307 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:15.307 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:15.307 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:15.879 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.879 10:57:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:15.879 10:57:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:15.879 10:57:43 -- common/autotest_common.sh@10 -- # set +x 00:03:15.879 10:57:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:15.879 10:57:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:15.879 10:57:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:15.879 10:57:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:15.879 10:57:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:15.879 10:57:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:15.879 10:57:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:15.879 10:57:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:15.879 10:57:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:15.879 10:57:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:15.879 10:57:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:15.879 10:57:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:15.879 10:57:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:16.138 10:57:43 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:16.139 10:57:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:16.139 10:57:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:16.139 10:57:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:16.139 10:57:43 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:16.139 10:57:43 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:16.139 10:57:43 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:16.139 10:57:43 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:16.139 10:57:43 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:16.139 10:57:43 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:16.139 10:57:43 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3857942 00:03:16.139 10:57:43 -- common/autotest_common.sh@1585 -- # waitforlisten 3857942 00:03:16.139 10:57:43 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:16.139 10:57:43 -- common/autotest_common.sh@835 -- # '[' -z 3857942 ']' 00:03:16.139 10:57:43 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:16.139 10:57:43 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:16.139 10:57:43 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:16.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:16.139 10:57:43 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:16.139 10:57:43 -- common/autotest_common.sh@10 -- # set +x 00:03:16.139 [2024-11-20 10:57:43.469431] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:03:16.139 [2024-11-20 10:57:43.469484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857942 ] 00:03:16.139 [2024-11-20 10:57:43.547208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:16.139 [2024-11-20 10:57:43.590390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:16.398 10:57:43 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:16.398 10:57:43 -- common/autotest_common.sh@868 -- # return 0 00:03:16.398 10:57:43 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:16.398 10:57:43 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:16.398 10:57:43 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:19.688 nvme0n1 00:03:19.688 10:57:46 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:19.688 [2024-11-20 10:57:46.984056] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:19.688 request: 00:03:19.688 { 00:03:19.688 "nvme_ctrlr_name": "nvme0", 00:03:19.688 "password": "test", 00:03:19.688 "method": "bdev_nvme_opal_revert", 00:03:19.688 "req_id": 1 00:03:19.688 } 00:03:19.688 Got JSON-RPC error response 00:03:19.688 response: 00:03:19.688 { 00:03:19.688 "code": -32602, 00:03:19.688 "message": "Invalid parameters" 00:03:19.688 } 00:03:19.688 10:57:46 -- common/autotest_common.sh@1591 -- # true 
00:03:19.688 10:57:46 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:19.688 10:57:47 -- common/autotest_common.sh@1595 -- # killprocess 3857942 00:03:19.688 10:57:47 -- common/autotest_common.sh@954 -- # '[' -z 3857942 ']' 00:03:19.688 10:57:47 -- common/autotest_common.sh@958 -- # kill -0 3857942 00:03:19.688 10:57:47 -- common/autotest_common.sh@959 -- # uname 00:03:19.688 10:57:47 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:19.688 10:57:47 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3857942 00:03:19.688 10:57:47 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:19.688 10:57:47 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:19.688 10:57:47 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3857942' 00:03:19.688 killing process with pid 3857942 00:03:19.688 10:57:47 -- common/autotest_common.sh@973 -- # kill 3857942 00:03:19.688 10:57:47 -- common/autotest_common.sh@978 -- # wait 3857942 00:03:21.589 10:57:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:21.589 10:57:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:21.589 10:57:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:21.589 10:57:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:21.589 10:57:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:21.589 10:57:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:21.589 10:57:48 -- common/autotest_common.sh@10 -- # set +x 00:03:21.589 10:57:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:21.589 10:57:48 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:21.589 10:57:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:21.589 10:57:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.589 10:57:48 -- common/autotest_common.sh@10 -- # set +x 00:03:21.589 ************************************ 00:03:21.589 START TEST env 00:03:21.589 
************************************ 00:03:21.589 10:57:48 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:21.589 * Looking for test storage... 00:03:21.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:21.589 10:57:48 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:21.589 10:57:48 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:21.589 10:57:48 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:21.590 10:57:48 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:21.590 10:57:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:21.590 10:57:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:21.590 10:57:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:21.590 10:57:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:21.590 10:57:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:21.590 10:57:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:21.590 10:57:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:21.590 10:57:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:21.590 10:57:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:21.590 10:57:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:21.590 10:57:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:21.590 10:57:48 env -- scripts/common.sh@344 -- # case "$op" in 00:03:21.590 10:57:48 env -- scripts/common.sh@345 -- # : 1 00:03:21.590 10:57:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:21.590 10:57:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:21.590 10:57:48 env -- scripts/common.sh@365 -- # decimal 1 00:03:21.590 10:57:48 env -- scripts/common.sh@353 -- # local d=1 00:03:21.590 10:57:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:21.590 10:57:48 env -- scripts/common.sh@355 -- # echo 1 00:03:21.590 10:57:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:21.590 10:57:48 env -- scripts/common.sh@366 -- # decimal 2 00:03:21.590 10:57:48 env -- scripts/common.sh@353 -- # local d=2 00:03:21.590 10:57:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:21.590 10:57:48 env -- scripts/common.sh@355 -- # echo 2 00:03:21.590 10:57:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:21.590 10:57:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:21.590 10:57:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:21.590 10:57:48 env -- scripts/common.sh@368 -- # return 0 00:03:21.590 10:57:48 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:21.590 10:57:48 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:21.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.590 --rc genhtml_branch_coverage=1 00:03:21.590 --rc genhtml_function_coverage=1 00:03:21.590 --rc genhtml_legend=1 00:03:21.590 --rc geninfo_all_blocks=1 00:03:21.590 --rc geninfo_unexecuted_blocks=1 00:03:21.590 00:03:21.590 ' 00:03:21.590 10:57:48 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:21.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.590 --rc genhtml_branch_coverage=1 00:03:21.590 --rc genhtml_function_coverage=1 00:03:21.590 --rc genhtml_legend=1 00:03:21.590 --rc geninfo_all_blocks=1 00:03:21.590 --rc geninfo_unexecuted_blocks=1 00:03:21.590 00:03:21.590 ' 00:03:21.590 10:57:48 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:21.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:21.590 --rc genhtml_branch_coverage=1 00:03:21.590 --rc genhtml_function_coverage=1 00:03:21.590 --rc genhtml_legend=1 00:03:21.590 --rc geninfo_all_blocks=1 00:03:21.590 --rc geninfo_unexecuted_blocks=1 00:03:21.590 00:03:21.590 ' 00:03:21.590 10:57:48 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:21.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.590 --rc genhtml_branch_coverage=1 00:03:21.590 --rc genhtml_function_coverage=1 00:03:21.590 --rc genhtml_legend=1 00:03:21.590 --rc geninfo_all_blocks=1 00:03:21.590 --rc geninfo_unexecuted_blocks=1 00:03:21.590 00:03:21.590 ' 00:03:21.590 10:57:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:21.590 10:57:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:21.590 10:57:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.590 10:57:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:21.590 ************************************ 00:03:21.590 START TEST env_memory 00:03:21.590 ************************************ 00:03:21.590 10:57:48 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:21.590 00:03:21.590 00:03:21.590 CUnit - A unit testing framework for C - Version 2.1-3 00:03:21.590 http://cunit.sourceforge.net/ 00:03:21.590 00:03:21.590 00:03:21.590 Suite: memory 00:03:21.590 Test: alloc and free memory map ...[2024-11-20 10:57:48.914284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:21.590 passed 00:03:21.590 Test: mem map translation ...[2024-11-20 10:57:48.933512] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:21.590 [2024-11-20 
10:57:48.933526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:21.590 [2024-11-20 10:57:48.933575] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:21.590 [2024-11-20 10:57:48.933581] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:21.590 passed 00:03:21.590 Test: mem map registration ...[2024-11-20 10:57:48.971394] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:21.590 [2024-11-20 10:57:48.971407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:21.590 passed 00:03:21.590 Test: mem map adjacent registrations ...passed 00:03:21.590 00:03:21.590 Run Summary: Type Total Ran Passed Failed Inactive 00:03:21.590 suites 1 1 n/a 0 0 00:03:21.590 tests 4 4 4 0 0 00:03:21.590 asserts 152 152 152 0 n/a 00:03:21.590 00:03:21.590 Elapsed time = 0.140 seconds 00:03:21.590 00:03:21.590 real 0m0.153s 00:03:21.590 user 0m0.144s 00:03:21.590 sys 0m0.008s 00:03:21.590 10:57:49 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:21.590 10:57:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:21.590 ************************************ 00:03:21.590 END TEST env_memory 00:03:21.590 ************************************ 00:03:21.590 10:57:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:21.590 10:57:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:21.590 10:57:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.590 10:57:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:21.849 ************************************ 00:03:21.849 START TEST env_vtophys 00:03:21.849 ************************************ 00:03:21.849 10:57:49 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:21.849 EAL: lib.eal log level changed from notice to debug 00:03:21.849 EAL: Detected lcore 0 as core 0 on socket 0 00:03:21.849 EAL: Detected lcore 1 as core 1 on socket 0 00:03:21.849 EAL: Detected lcore 2 as core 2 on socket 0 00:03:21.849 EAL: Detected lcore 3 as core 3 on socket 0 00:03:21.849 EAL: Detected lcore 4 as core 4 on socket 0 00:03:21.849 EAL: Detected lcore 5 as core 5 on socket 0 00:03:21.849 EAL: Detected lcore 6 as core 6 on socket 0 00:03:21.849 EAL: Detected lcore 7 as core 8 on socket 0 00:03:21.849 EAL: Detected lcore 8 as core 9 on socket 0 00:03:21.849 EAL: Detected lcore 9 as core 10 on socket 0 00:03:21.849 EAL: Detected lcore 10 as core 11 on socket 0 00:03:21.849 EAL: Detected lcore 11 as core 12 on socket 0 00:03:21.849 EAL: Detected lcore 12 as core 13 on socket 0 00:03:21.849 EAL: Detected lcore 13 as core 16 on socket 0 00:03:21.849 EAL: Detected lcore 14 as core 17 on socket 0 00:03:21.849 EAL: Detected lcore 15 as core 18 on socket 0 00:03:21.849 EAL: Detected lcore 16 as core 19 on socket 0 00:03:21.849 EAL: Detected lcore 17 as core 20 on socket 0 00:03:21.849 EAL: Detected lcore 18 as core 21 on socket 0 00:03:21.849 EAL: Detected lcore 19 as core 25 on socket 0 00:03:21.849 EAL: Detected lcore 20 as core 26 on socket 0 00:03:21.849 EAL: Detected lcore 21 as core 27 on socket 0 00:03:21.849 EAL: Detected lcore 22 as core 28 on socket 0 00:03:21.849 EAL: Detected lcore 23 as core 29 on socket 0 00:03:21.849 EAL: Detected lcore 24 as core 0 on socket 1 00:03:21.849 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:21.849 EAL: Detected lcore 26 as core 2 on socket 1 00:03:21.849 EAL: Detected lcore 27 as core 3 on socket 1 00:03:21.849 EAL: Detected lcore 28 as core 4 on socket 1 00:03:21.849 EAL: Detected lcore 29 as core 5 on socket 1 00:03:21.849 EAL: Detected lcore 30 as core 6 on socket 1 00:03:21.849 EAL: Detected lcore 31 as core 9 on socket 1 00:03:21.849 EAL: Detected lcore 32 as core 10 on socket 1 00:03:21.849 EAL: Detected lcore 33 as core 11 on socket 1 00:03:21.849 EAL: Detected lcore 34 as core 12 on socket 1 00:03:21.849 EAL: Detected lcore 35 as core 13 on socket 1 00:03:21.849 EAL: Detected lcore 36 as core 16 on socket 1 00:03:21.849 EAL: Detected lcore 37 as core 17 on socket 1 00:03:21.849 EAL: Detected lcore 38 as core 18 on socket 1 00:03:21.849 EAL: Detected lcore 39 as core 19 on socket 1 00:03:21.849 EAL: Detected lcore 40 as core 20 on socket 1 00:03:21.849 EAL: Detected lcore 41 as core 21 on socket 1 00:03:21.849 EAL: Detected lcore 42 as core 24 on socket 1 00:03:21.849 EAL: Detected lcore 43 as core 25 on socket 1 00:03:21.849 EAL: Detected lcore 44 as core 26 on socket 1 00:03:21.849 EAL: Detected lcore 45 as core 27 on socket 1 00:03:21.849 EAL: Detected lcore 46 as core 28 on socket 1 00:03:21.849 EAL: Detected lcore 47 as core 29 on socket 1 00:03:21.849 EAL: Detected lcore 48 as core 0 on socket 0 00:03:21.849 EAL: Detected lcore 49 as core 1 on socket 0 00:03:21.849 EAL: Detected lcore 50 as core 2 on socket 0 00:03:21.849 EAL: Detected lcore 51 as core 3 on socket 0 00:03:21.849 EAL: Detected lcore 52 as core 4 on socket 0 00:03:21.849 EAL: Detected lcore 53 as core 5 on socket 0 00:03:21.849 EAL: Detected lcore 54 as core 6 on socket 0 00:03:21.849 EAL: Detected lcore 55 as core 8 on socket 0 00:03:21.849 EAL: Detected lcore 56 as core 9 on socket 0 00:03:21.849 EAL: Detected lcore 57 as core 10 on socket 0 00:03:21.849 EAL: Detected lcore 58 as core 11 on socket 0 00:03:21.849 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:21.849 EAL: Detected lcore 60 as core 13 on socket 0 00:03:21.849 EAL: Detected lcore 61 as core 16 on socket 0 00:03:21.849 EAL: Detected lcore 62 as core 17 on socket 0 00:03:21.849 EAL: Detected lcore 63 as core 18 on socket 0 00:03:21.849 EAL: Detected lcore 64 as core 19 on socket 0 00:03:21.849 EAL: Detected lcore 65 as core 20 on socket 0 00:03:21.849 EAL: Detected lcore 66 as core 21 on socket 0 00:03:21.849 EAL: Detected lcore 67 as core 25 on socket 0 00:03:21.849 EAL: Detected lcore 68 as core 26 on socket 0 00:03:21.849 EAL: Detected lcore 69 as core 27 on socket 0 00:03:21.849 EAL: Detected lcore 70 as core 28 on socket 0 00:03:21.849 EAL: Detected lcore 71 as core 29 on socket 0 00:03:21.849 EAL: Detected lcore 72 as core 0 on socket 1 00:03:21.849 EAL: Detected lcore 73 as core 1 on socket 1 00:03:21.849 EAL: Detected lcore 74 as core 2 on socket 1 00:03:21.849 EAL: Detected lcore 75 as core 3 on socket 1 00:03:21.849 EAL: Detected lcore 76 as core 4 on socket 1 00:03:21.849 EAL: Detected lcore 77 as core 5 on socket 1 00:03:21.849 EAL: Detected lcore 78 as core 6 on socket 1 00:03:21.849 EAL: Detected lcore 79 as core 9 on socket 1 00:03:21.849 EAL: Detected lcore 80 as core 10 on socket 1 00:03:21.849 EAL: Detected lcore 81 as core 11 on socket 1 00:03:21.849 EAL: Detected lcore 82 as core 12 on socket 1 00:03:21.849 EAL: Detected lcore 83 as core 13 on socket 1 00:03:21.849 EAL: Detected lcore 84 as core 16 on socket 1 00:03:21.849 EAL: Detected lcore 85 as core 17 on socket 1 00:03:21.849 EAL: Detected lcore 86 as core 18 on socket 1 00:03:21.849 EAL: Detected lcore 87 as core 19 on socket 1 00:03:21.849 EAL: Detected lcore 88 as core 20 on socket 1 00:03:21.849 EAL: Detected lcore 89 as core 21 on socket 1 00:03:21.849 EAL: Detected lcore 90 as core 24 on socket 1 00:03:21.849 EAL: Detected lcore 91 as core 25 on socket 1 00:03:21.849 EAL: Detected lcore 92 as core 26 on socket 1 00:03:21.849 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:21.849 EAL: Detected lcore 94 as core 28 on socket 1 00:03:21.849 EAL: Detected lcore 95 as core 29 on socket 1 00:03:21.849 EAL: Maximum logical cores by configuration: 128 00:03:21.849 EAL: Detected CPU lcores: 96 00:03:21.849 EAL: Detected NUMA nodes: 2 00:03:21.849 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:21.849 EAL: Detected shared linkage of DPDK 00:03:21.849 EAL: No shared files mode enabled, IPC will be disabled 00:03:21.849 EAL: Bus pci wants IOVA as 'DC' 00:03:21.849 EAL: Buses did not request a specific IOVA mode. 00:03:21.849 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:21.849 EAL: Selected IOVA mode 'VA' 00:03:21.849 EAL: Probing VFIO support... 00:03:21.849 EAL: IOMMU type 1 (Type 1) is supported 00:03:21.849 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:21.849 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:21.849 EAL: VFIO support initialized 00:03:21.849 EAL: Ask a virtual area of 0x2e000 bytes 00:03:21.849 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:21.849 EAL: Setting up physically contiguous memory... 
00:03:21.849 EAL: Setting maximum number of open files to 524288 00:03:21.849 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:21.849 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:21.849 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:21.849 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.849 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:21.849 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.849 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.849 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:21.849 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:21.849 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.849 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:21.849 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.849 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.849 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:21.849 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:21.849 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.849 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:21.849 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.849 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.849 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:21.849 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:21.849 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.849 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:21.849 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.849 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.849 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:21.849 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:21.849 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:21.849 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.849 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:21.849 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.849 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.849 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:21.849 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:21.849 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.849 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:21.849 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.849 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.849 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:21.849 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:21.849 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.849 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:21.849 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.849 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.849 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:21.849 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:21.849 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.849 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:21.849 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.849 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.849 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:21.849 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:21.849 EAL: Hugepages will be freed exactly as allocated.
00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: TSC frequency is ~2300000 KHz 00:03:21.849 EAL: Main lcore 0 is ready (tid=7f1a57b8ca00;cpuset=[0]) 00:03:21.849 EAL: Trying to obtain current memory policy. 00:03:21.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.849 EAL: Restoring previous memory policy: 0 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was expanded by 2MB 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:21.849 EAL: Mem event callback 'spdk:(nil)' registered 00:03:21.849 00:03:21.849 00:03:21.849 CUnit - A unit testing framework for C - Version 2.1-3 00:03:21.849 http://cunit.sourceforge.net/ 00:03:21.849 00:03:21.849 00:03:21.849 Suite: components_suite 00:03:21.849 Test: vtophys_malloc_test ...passed 00:03:21.849 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:21.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.849 EAL: Restoring previous memory policy: 4 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was expanded by 4MB 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was shrunk by 4MB 00:03:21.849 EAL: Trying to obtain current memory policy. 
00:03:21.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.849 EAL: Restoring previous memory policy: 4 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was expanded by 6MB 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was shrunk by 6MB 00:03:21.849 EAL: Trying to obtain current memory policy. 00:03:21.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.849 EAL: Restoring previous memory policy: 4 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was expanded by 10MB 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was shrunk by 10MB 00:03:21.849 EAL: Trying to obtain current memory policy. 00:03:21.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.849 EAL: Restoring previous memory policy: 4 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was expanded by 18MB 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was shrunk by 18MB 00:03:21.849 EAL: Trying to obtain current memory policy. 
00:03:21.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.849 EAL: Restoring previous memory policy: 4 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was expanded by 34MB 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was shrunk by 34MB 00:03:21.849 EAL: Trying to obtain current memory policy. 00:03:21.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.849 EAL: Restoring previous memory policy: 4 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was expanded by 66MB 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was shrunk by 66MB 00:03:21.849 EAL: Trying to obtain current memory policy. 00:03:21.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.849 EAL: Restoring previous memory policy: 4 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was expanded by 130MB 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was shrunk by 130MB 00:03:21.849 EAL: Trying to obtain current memory policy. 
00:03:21.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.849 EAL: Restoring previous memory policy: 4 00:03:21.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.849 EAL: request: mp_malloc_sync 00:03:21.849 EAL: No shared files mode enabled, IPC is disabled 00:03:21.849 EAL: Heap on socket 0 was expanded by 258MB 00:03:22.108 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.108 EAL: request: mp_malloc_sync 00:03:22.108 EAL: No shared files mode enabled, IPC is disabled 00:03:22.108 EAL: Heap on socket 0 was shrunk by 258MB 00:03:22.108 EAL: Trying to obtain current memory policy. 00:03:22.108 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.108 EAL: Restoring previous memory policy: 4 00:03:22.108 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.108 EAL: request: mp_malloc_sync 00:03:22.108 EAL: No shared files mode enabled, IPC is disabled 00:03:22.108 EAL: Heap on socket 0 was expanded by 514MB 00:03:22.108 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.367 EAL: request: mp_malloc_sync 00:03:22.367 EAL: No shared files mode enabled, IPC is disabled 00:03:22.367 EAL: Heap on socket 0 was shrunk by 514MB 00:03:22.367 EAL: Trying to obtain current memory policy. 
00:03:22.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.624 EAL: Restoring previous memory policy: 4 00:03:22.624 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.624 EAL: request: mp_malloc_sync 00:03:22.624 EAL: No shared files mode enabled, IPC is disabled 00:03:22.624 EAL: Heap on socket 0 was expanded by 1026MB 00:03:22.624 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.882 EAL: request: mp_malloc_sync 00:03:22.882 EAL: No shared files mode enabled, IPC is disabled 00:03:22.882 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:22.882 passed 00:03:22.882 00:03:22.882 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.882 suites 1 1 n/a 0 0 00:03:22.882 tests 2 2 2 0 0 00:03:22.882 asserts 497 497 497 0 n/a 00:03:22.882 00:03:22.882 Elapsed time = 0.980 seconds 00:03:22.882 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.882 EAL: request: mp_malloc_sync 00:03:22.882 EAL: No shared files mode enabled, IPC is disabled 00:03:22.882 EAL: Heap on socket 0 was shrunk by 2MB 00:03:22.882 EAL: No shared files mode enabled, IPC is disabled 00:03:22.882 EAL: No shared files mode enabled, IPC is disabled 00:03:22.882 EAL: No shared files mode enabled, IPC is disabled 00:03:22.882 00:03:22.882 real 0m1.107s 00:03:22.883 user 0m0.661s 00:03:22.883 sys 0m0.422s 00:03:22.883 10:57:50 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.883 10:57:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:22.883 ************************************ 00:03:22.883 END TEST env_vtophys 00:03:22.883 ************************************ 00:03:22.883 10:57:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:22.883 10:57:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:22.883 10:57:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.883 10:57:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.883 
************************************ 00:03:22.883 START TEST env_pci 00:03:22.883 ************************************ 00:03:22.883 10:57:50 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:22.883 00:03:22.883 00:03:22.883 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.883 http://cunit.sourceforge.net/ 00:03:22.883 00:03:22.883 00:03:22.883 Suite: pci 00:03:22.883 Test: pci_hook ...[2024-11-20 10:57:50.283514] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3859164 has claimed it 00:03:22.883 EAL: Cannot find device (10000:00:01.0) 00:03:22.883 EAL: Failed to attach device on primary process 00:03:22.883 passed 00:03:22.883 00:03:22.883 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.883 suites 1 1 n/a 0 0 00:03:22.883 tests 1 1 1 0 0 00:03:22.883 asserts 25 25 25 0 n/a 00:03:22.883 00:03:22.883 Elapsed time = 0.027 seconds 00:03:22.883 00:03:22.883 real 0m0.048s 00:03:22.883 user 0m0.016s 00:03:22.883 sys 0m0.032s 00:03:22.883 10:57:50 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.883 10:57:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:22.883 ************************************ 00:03:22.883 END TEST env_pci 00:03:22.883 ************************************ 00:03:22.883 10:57:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:22.883 10:57:50 env -- env/env.sh@15 -- # uname 00:03:22.883 10:57:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:22.883 10:57:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:22.883 10:57:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.883 10:57:50 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:22.883 10:57:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.883 10:57:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:23.142 ************************************ 00:03:23.142 START TEST env_dpdk_post_init 00:03:23.142 ************************************ 00:03:23.142 10:57:50 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:23.142 EAL: Detected CPU lcores: 96 00:03:23.142 EAL: Detected NUMA nodes: 2 00:03:23.142 EAL: Detected shared linkage of DPDK 00:03:23.142 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:23.142 EAL: Selected IOVA mode 'VA' 00:03:23.142 EAL: VFIO support initialized 00:03:23.143 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:23.143 EAL: Using IOMMU type 1 (Type 1) 00:03:23.143 EAL: Ignore mapping IO port bar(1) 00:03:23.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:23.143 EAL: Ignore mapping IO port bar(1) 00:03:23.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:23.143 EAL: Ignore mapping IO port bar(1) 00:03:23.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:23.143 EAL: Ignore mapping IO port bar(1) 00:03:23.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:23.143 EAL: Ignore mapping IO port bar(1) 00:03:23.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:23.143 EAL: Ignore mapping IO port bar(1) 00:03:23.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:23.143 EAL: Ignore mapping IO port bar(1) 00:03:23.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:23.143 EAL: Ignore mapping IO port bar(1) 00:03:23.143 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:24.081 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:24.081 EAL: Ignore mapping IO port bar(1) 00:03:24.081 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:24.081 EAL: Ignore mapping IO port bar(1) 00:03:24.081 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:24.081 EAL: Ignore mapping IO port bar(1) 00:03:24.081 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:24.081 EAL: Ignore mapping IO port bar(1) 00:03:24.081 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:24.081 EAL: Ignore mapping IO port bar(1) 00:03:24.081 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:24.081 EAL: Ignore mapping IO port bar(1) 00:03:24.081 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:24.081 EAL: Ignore mapping IO port bar(1) 00:03:24.081 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:24.081 EAL: Ignore mapping IO port bar(1) 00:03:24.081 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:27.371 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:27.371 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:27.371 Starting DPDK initialization... 00:03:27.371 Starting SPDK post initialization... 00:03:27.371 SPDK NVMe probe 00:03:27.371 Attaching to 0000:5e:00.0 00:03:27.371 Attached to 0000:5e:00.0 00:03:27.371 Cleaning up... 
00:03:27.371 00:03:27.371 real 0m4.367s 00:03:27.371 user 0m2.981s 00:03:27.371 sys 0m0.454s 00:03:27.371 10:57:54 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:27.371 10:57:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:27.371 ************************************ 00:03:27.371 END TEST env_dpdk_post_init 00:03:27.371 ************************************ 00:03:27.371 10:57:54 env -- env/env.sh@26 -- # uname 00:03:27.371 10:57:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:27.371 10:57:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:27.371 10:57:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.371 10:57:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.371 10:57:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.371 ************************************ 00:03:27.371 START TEST env_mem_callbacks 00:03:27.371 ************************************ 00:03:27.371 10:57:54 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:27.371 EAL: Detected CPU lcores: 96 00:03:27.371 EAL: Detected NUMA nodes: 2 00:03:27.371 EAL: Detected shared linkage of DPDK 00:03:27.371 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:27.630 EAL: Selected IOVA mode 'VA' 00:03:27.630 EAL: VFIO support initialized 00:03:27.630 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:27.630 00:03:27.630 00:03:27.630 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.631 http://cunit.sourceforge.net/ 00:03:27.631 00:03:27.631 00:03:27.631 Suite: memory 00:03:27.631 Test: test ... 
00:03:27.631 register 0x200000200000 2097152 00:03:27.631 malloc 3145728 00:03:27.631 register 0x200000400000 4194304 00:03:27.631 buf 0x200000500000 len 3145728 PASSED 00:03:27.631 malloc 64 00:03:27.631 buf 0x2000004fff40 len 64 PASSED 00:03:27.631 malloc 4194304 00:03:27.631 register 0x200000800000 6291456 00:03:27.631 buf 0x200000a00000 len 4194304 PASSED 00:03:27.631 free 0x200000500000 3145728 00:03:27.631 free 0x2000004fff40 64 00:03:27.631 unregister 0x200000400000 4194304 PASSED 00:03:27.631 free 0x200000a00000 4194304 00:03:27.631 unregister 0x200000800000 6291456 PASSED 00:03:27.631 malloc 8388608 00:03:27.631 register 0x200000400000 10485760 00:03:27.631 buf 0x200000600000 len 8388608 PASSED 00:03:27.631 free 0x200000600000 8388608 00:03:27.631 unregister 0x200000400000 10485760 PASSED 00:03:27.631 passed 00:03:27.631 00:03:27.631 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.631 suites 1 1 n/a 0 0 00:03:27.631 tests 1 1 1 0 0 00:03:27.631 asserts 15 15 15 0 n/a 00:03:27.631 00:03:27.631 Elapsed time = 0.007 seconds 00:03:27.631 00:03:27.631 real 0m0.055s 00:03:27.631 user 0m0.020s 00:03:27.631 sys 0m0.035s 00:03:27.631 10:57:54 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:27.631 10:57:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:27.631 ************************************ 00:03:27.631 END TEST env_mem_callbacks 00:03:27.631 ************************************ 00:03:27.631 00:03:27.631 real 0m6.263s 00:03:27.631 user 0m4.064s 00:03:27.631 sys 0m1.278s 00:03:27.631 10:57:54 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:27.631 10:57:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.631 ************************************ 00:03:27.631 END TEST env 00:03:27.631 ************************************ 00:03:27.631 10:57:54 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:27.631 10:57:54 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.631 10:57:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.631 10:57:54 -- common/autotest_common.sh@10 -- # set +x 00:03:27.631 ************************************ 00:03:27.631 START TEST rpc 00:03:27.631 ************************************ 00:03:27.631 10:57:54 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:27.631 * Looking for test storage... 00:03:27.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.631 10:57:55 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:27.631 10:57:55 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:27.631 10:57:55 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:27.890 10:57:55 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:27.890 10:57:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:27.890 10:57:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:27.890 10:57:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:27.890 10:57:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:27.890 10:57:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:27.890 10:57:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:27.890 10:57:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:27.890 10:57:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:27.890 10:57:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:27.890 10:57:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:27.890 10:57:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:27.890 10:57:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:27.890 10:57:55 rpc -- scripts/common.sh@345 -- # : 1 00:03:27.890 10:57:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:27.890 10:57:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:27.890 10:57:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:27.890 10:57:55 rpc -- scripts/common.sh@353 -- # local d=1 00:03:27.890 10:57:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:27.890 10:57:55 rpc -- scripts/common.sh@355 -- # echo 1 00:03:27.890 10:57:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:27.891 10:57:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:27.891 10:57:55 rpc -- scripts/common.sh@353 -- # local d=2 00:03:27.891 10:57:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:27.891 10:57:55 rpc -- scripts/common.sh@355 -- # echo 2 00:03:27.891 10:57:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:27.891 10:57:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:27.891 10:57:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:27.891 10:57:55 rpc -- scripts/common.sh@368 -- # return 0 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:27.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.891 --rc genhtml_branch_coverage=1 00:03:27.891 --rc genhtml_function_coverage=1 00:03:27.891 --rc genhtml_legend=1 00:03:27.891 --rc geninfo_all_blocks=1 00:03:27.891 --rc geninfo_unexecuted_blocks=1 00:03:27.891 00:03:27.891 ' 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:27.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.891 --rc genhtml_branch_coverage=1 00:03:27.891 --rc genhtml_function_coverage=1 00:03:27.891 --rc genhtml_legend=1 00:03:27.891 --rc geninfo_all_blocks=1 00:03:27.891 --rc geninfo_unexecuted_blocks=1 00:03:27.891 00:03:27.891 ' 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:27.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:27.891 --rc genhtml_branch_coverage=1 00:03:27.891 --rc genhtml_function_coverage=1 00:03:27.891 --rc genhtml_legend=1 00:03:27.891 --rc geninfo_all_blocks=1 00:03:27.891 --rc geninfo_unexecuted_blocks=1 00:03:27.891 00:03:27.891 ' 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:27.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.891 --rc genhtml_branch_coverage=1 00:03:27.891 --rc genhtml_function_coverage=1 00:03:27.891 --rc genhtml_legend=1 00:03:27.891 --rc geninfo_all_blocks=1 00:03:27.891 --rc geninfo_unexecuted_blocks=1 00:03:27.891 00:03:27.891 ' 00:03:27.891 10:57:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3860093 00:03:27.891 10:57:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:27.891 10:57:55 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:27.891 10:57:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3860093 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@835 -- # '[' -z 3860093 ']' 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:27.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:27.891 10:57:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.891 [2024-11-20 10:57:55.222344] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:03:27.891 [2024-11-20 10:57:55.222391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3860093 ] 00:03:27.891 [2024-11-20 10:57:55.296120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.891 [2024-11-20 10:57:55.338337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:27.891 [2024-11-20 10:57:55.338375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3860093' to capture a snapshot of events at runtime. 00:03:27.891 [2024-11-20 10:57:55.338384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:27.891 [2024-11-20 10:57:55.338389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:27.891 [2024-11-20 10:57:55.338394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3860093 for offline analysis/debug. 
00:03:27.891 [2024-11-20 10:57:55.338962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.150 10:57:55 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:28.150 10:57:55 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:28.150 10:57:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:28.150 10:57:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:28.150 10:57:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:28.151 10:57:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:28.151 10:57:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.151 10:57:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.151 10:57:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.151 ************************************ 00:03:28.151 START TEST rpc_integrity 00:03:28.151 ************************************ 00:03:28.151 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:28.151 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:28.151 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.151 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.151 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.151 10:57:55 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:28.151 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:28.410 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:28.410 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:28.410 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.410 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.410 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.410 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:28.410 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:28.410 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.410 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.410 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.410 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:28.410 { 00:03:28.410 "name": "Malloc0", 00:03:28.410 "aliases": [ 00:03:28.410 "b9d10bd8-286d-4f9a-b5d5-8a6100ab6bb6" 00:03:28.410 ], 00:03:28.410 "product_name": "Malloc disk", 00:03:28.410 "block_size": 512, 00:03:28.410 "num_blocks": 16384, 00:03:28.410 "uuid": "b9d10bd8-286d-4f9a-b5d5-8a6100ab6bb6", 00:03:28.410 "assigned_rate_limits": { 00:03:28.410 "rw_ios_per_sec": 0, 00:03:28.410 "rw_mbytes_per_sec": 0, 00:03:28.410 "r_mbytes_per_sec": 0, 00:03:28.410 "w_mbytes_per_sec": 0 00:03:28.410 }, 00:03:28.410 "claimed": false, 00:03:28.410 "zoned": false, 00:03:28.411 "supported_io_types": { 00:03:28.411 "read": true, 00:03:28.411 "write": true, 00:03:28.411 "unmap": true, 00:03:28.411 "flush": true, 00:03:28.411 "reset": true, 00:03:28.411 "nvme_admin": false, 00:03:28.411 "nvme_io": false, 00:03:28.411 "nvme_io_md": false, 00:03:28.411 "write_zeroes": true, 00:03:28.411 "zcopy": true, 00:03:28.411 "get_zone_info": false, 00:03:28.411 
"zone_management": false, 00:03:28.411 "zone_append": false, 00:03:28.411 "compare": false, 00:03:28.411 "compare_and_write": false, 00:03:28.411 "abort": true, 00:03:28.411 "seek_hole": false, 00:03:28.411 "seek_data": false, 00:03:28.411 "copy": true, 00:03:28.411 "nvme_iov_md": false 00:03:28.411 }, 00:03:28.411 "memory_domains": [ 00:03:28.411 { 00:03:28.411 "dma_device_id": "system", 00:03:28.411 "dma_device_type": 1 00:03:28.411 }, 00:03:28.411 { 00:03:28.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.411 "dma_device_type": 2 00:03:28.411 } 00:03:28.411 ], 00:03:28.411 "driver_specific": {} 00:03:28.411 } 00:03:28.411 ]' 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.411 [2024-11-20 10:57:55.722187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:28.411 [2024-11-20 10:57:55.722216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:28.411 [2024-11-20 10:57:55.722229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x247f6e0 00:03:28.411 [2024-11-20 10:57:55.722236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:28.411 [2024-11-20 10:57:55.723367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:28.411 [2024-11-20 10:57:55.723388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:28.411 Passthru0 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:28.411 { 00:03:28.411 "name": "Malloc0", 00:03:28.411 "aliases": [ 00:03:28.411 "b9d10bd8-286d-4f9a-b5d5-8a6100ab6bb6" 00:03:28.411 ], 00:03:28.411 "product_name": "Malloc disk", 00:03:28.411 "block_size": 512, 00:03:28.411 "num_blocks": 16384, 00:03:28.411 "uuid": "b9d10bd8-286d-4f9a-b5d5-8a6100ab6bb6", 00:03:28.411 "assigned_rate_limits": { 00:03:28.411 "rw_ios_per_sec": 0, 00:03:28.411 "rw_mbytes_per_sec": 0, 00:03:28.411 "r_mbytes_per_sec": 0, 00:03:28.411 "w_mbytes_per_sec": 0 00:03:28.411 }, 00:03:28.411 "claimed": true, 00:03:28.411 "claim_type": "exclusive_write", 00:03:28.411 "zoned": false, 00:03:28.411 "supported_io_types": { 00:03:28.411 "read": true, 00:03:28.411 "write": true, 00:03:28.411 "unmap": true, 00:03:28.411 "flush": true, 00:03:28.411 "reset": true, 00:03:28.411 "nvme_admin": false, 00:03:28.411 "nvme_io": false, 00:03:28.411 "nvme_io_md": false, 00:03:28.411 "write_zeroes": true, 00:03:28.411 "zcopy": true, 00:03:28.411 "get_zone_info": false, 00:03:28.411 "zone_management": false, 00:03:28.411 "zone_append": false, 00:03:28.411 "compare": false, 00:03:28.411 "compare_and_write": false, 00:03:28.411 "abort": true, 00:03:28.411 "seek_hole": false, 00:03:28.411 "seek_data": false, 00:03:28.411 "copy": true, 00:03:28.411 "nvme_iov_md": false 00:03:28.411 }, 00:03:28.411 "memory_domains": [ 00:03:28.411 { 00:03:28.411 "dma_device_id": "system", 00:03:28.411 "dma_device_type": 1 00:03:28.411 }, 00:03:28.411 { 00:03:28.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.411 "dma_device_type": 2 00:03:28.411 } 00:03:28.411 ], 00:03:28.411 "driver_specific": {} 00:03:28.411 }, 00:03:28.411 {
00:03:28.411 "name": "Passthru0", 00:03:28.411 "aliases": [ 00:03:28.411 "0b90b4f1-3ff2-54ba-9d31-3f32ce8e0669" 00:03:28.411 ], 00:03:28.411 "product_name": "passthru", 00:03:28.411 "block_size": 512, 00:03:28.411 "num_blocks": 16384, 00:03:28.411 "uuid": "0b90b4f1-3ff2-54ba-9d31-3f32ce8e0669", 00:03:28.411 "assigned_rate_limits": { 00:03:28.411 "rw_ios_per_sec": 0, 00:03:28.411 "rw_mbytes_per_sec": 0, 00:03:28.411 "r_mbytes_per_sec": 0, 00:03:28.411 "w_mbytes_per_sec": 0 00:03:28.411 }, 00:03:28.411 "claimed": false, 00:03:28.411 "zoned": false, 00:03:28.411 "supported_io_types": { 00:03:28.411 "read": true, 00:03:28.411 "write": true, 00:03:28.411 "unmap": true, 00:03:28.411 "flush": true, 00:03:28.411 "reset": true, 00:03:28.411 "nvme_admin": false, 00:03:28.411 "nvme_io": false, 00:03:28.411 "nvme_io_md": false, 00:03:28.411 "write_zeroes": true, 00:03:28.411 "zcopy": true, 00:03:28.411 "get_zone_info": false, 00:03:28.411 "zone_management": false, 00:03:28.411 "zone_append": false, 00:03:28.411 "compare": false, 00:03:28.411 "compare_and_write": false, 00:03:28.411 "abort": true, 00:03:28.411 "seek_hole": false, 00:03:28.411 "seek_data": false, 00:03:28.411 "copy": true, 00:03:28.411 "nvme_iov_md": false 00:03:28.411 }, 00:03:28.411 "memory_domains": [ 00:03:28.411 { 00:03:28.411 "dma_device_id": "system", 00:03:28.411 "dma_device_type": 1 00:03:28.411 }, 00:03:28.411 { 00:03:28.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.411 "dma_device_type": 2 00:03:28.411 } 00:03:28.411 ], 00:03:28.411 "driver_specific": { 00:03:28.411 "passthru": { 00:03:28.411 "name": "Passthru0", 00:03:28.411 "base_bdev_name": "Malloc0" 00:03:28.411 } 00:03:28.411 } 00:03:28.411 } 00:03:28.411 ]' 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:28.411 10:57:55 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:28.411 10:57:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:28.411 00:03:28.411 real 0m0.269s 00:03:28.411 user 0m0.165s 00:03:28.411 sys 0m0.040s 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.411 10:57:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.411 ************************************ 00:03:28.411 END TEST rpc_integrity 00:03:28.411 ************************************ 00:03:28.411 10:57:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:28.411 10:57:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.411 10:57:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.411 10:57:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.671 ************************************ 00:03:28.671 START TEST rpc_plugins 
00:03:28.671 ************************************ 00:03:28.671 10:57:55 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:28.671 10:57:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:28.671 10:57:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.671 10:57:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.671 10:57:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.671 10:57:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:28.671 10:57:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:28.671 10:57:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.671 10:57:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.671 10:57:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.671 10:57:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:28.671 { 00:03:28.671 "name": "Malloc1", 00:03:28.671 "aliases": [ 00:03:28.671 "f853dd4b-b41f-4f8f-8463-f7bf312f4745" 00:03:28.671 ], 00:03:28.671 "product_name": "Malloc disk", 00:03:28.671 "block_size": 4096, 00:03:28.671 "num_blocks": 256, 00:03:28.671 "uuid": "f853dd4b-b41f-4f8f-8463-f7bf312f4745", 00:03:28.671 "assigned_rate_limits": { 00:03:28.671 "rw_ios_per_sec": 0, 00:03:28.671 "rw_mbytes_per_sec": 0, 00:03:28.671 "r_mbytes_per_sec": 0, 00:03:28.671 "w_mbytes_per_sec": 0 00:03:28.671 }, 00:03:28.671 "claimed": false, 00:03:28.671 "zoned": false, 00:03:28.671 "supported_io_types": { 00:03:28.671 "read": true, 00:03:28.671 "write": true, 00:03:28.671 "unmap": true, 00:03:28.671 "flush": true, 00:03:28.671 "reset": true, 00:03:28.671 "nvme_admin": false, 00:03:28.671 "nvme_io": false, 00:03:28.671 "nvme_io_md": false, 00:03:28.671 "write_zeroes": true, 00:03:28.671 "zcopy": true, 00:03:28.671 "get_zone_info": false, 00:03:28.671 "zone_management": false, 00:03:28.671 
"zone_append": false, 00:03:28.671 "compare": false, 00:03:28.671 "compare_and_write": false, 00:03:28.671 "abort": true, 00:03:28.671 "seek_hole": false, 00:03:28.671 "seek_data": false, 00:03:28.671 "copy": true, 00:03:28.671 "nvme_iov_md": false 00:03:28.671 }, 00:03:28.671 "memory_domains": [ 00:03:28.671 { 00:03:28.671 "dma_device_id": "system", 00:03:28.671 "dma_device_type": 1 00:03:28.671 }, 00:03:28.671 { 00:03:28.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.671 "dma_device_type": 2 00:03:28.671 } 00:03:28.671 ], 00:03:28.671 "driver_specific": {} 00:03:28.671 } 00:03:28.671 ]' 00:03:28.671 10:57:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:28.671 10:57:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:28.671 10:57:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:28.671 10:57:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.671 10:57:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.671 10:57:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.671 10:57:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:28.671 10:57:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.672 10:57:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.672 10:57:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.672 10:57:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:28.672 10:57:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:28.672 10:57:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:28.672 00:03:28.672 real 0m0.142s 00:03:28.672 user 0m0.087s 00:03:28.672 sys 0m0.020s 00:03:28.672 10:57:56 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.672 10:57:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.672 ************************************ 
00:03:28.672 END TEST rpc_plugins 00:03:28.672 ************************************ 00:03:28.672 10:57:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:28.672 10:57:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.672 10:57:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.672 10:57:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.672 ************************************ 00:03:28.672 START TEST rpc_trace_cmd_test 00:03:28.672 ************************************ 00:03:28.672 10:57:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:28.672 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:28.672 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:28.672 10:57:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.672 10:57:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:28.931 10:57:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:28.932 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3860093", 00:03:28.932 "tpoint_group_mask": "0x8", 00:03:28.932 "iscsi_conn": { 00:03:28.932 "mask": "0x2", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "scsi": { 00:03:28.932 "mask": "0x4", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "bdev": { 00:03:28.932 "mask": "0x8", 00:03:28.932 "tpoint_mask": "0xffffffffffffffff" 00:03:28.932 }, 00:03:28.932 "nvmf_rdma": { 00:03:28.932 "mask": "0x10", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "nvmf_tcp": { 00:03:28.932 "mask": "0x20", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "ftl": { 00:03:28.932 "mask": "0x40", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "blobfs": { 00:03:28.932 "mask": "0x80", 00:03:28.932 
"tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "dsa": { 00:03:28.932 "mask": "0x200", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "thread": { 00:03:28.932 "mask": "0x400", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "nvme_pcie": { 00:03:28.932 "mask": "0x800", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "iaa": { 00:03:28.932 "mask": "0x1000", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "nvme_tcp": { 00:03:28.932 "mask": "0x2000", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "bdev_nvme": { 00:03:28.932 "mask": "0x4000", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "sock": { 00:03:28.932 "mask": "0x8000", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "blob": { 00:03:28.932 "mask": "0x10000", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "bdev_raid": { 00:03:28.932 "mask": "0x20000", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 }, 00:03:28.932 "scheduler": { 00:03:28.932 "mask": "0x40000", 00:03:28.932 "tpoint_mask": "0x0" 00:03:28.932 } 00:03:28.932 }' 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:28.932 00:03:28.932 real 0m0.215s 00:03:28.932 user 0m0.178s 00:03:28.932 sys 0m0.027s 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.932 10:57:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:28.932 ************************************ 00:03:28.932 END TEST rpc_trace_cmd_test 00:03:28.932 ************************************ 00:03:28.932 10:57:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:28.932 10:57:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:28.932 10:57:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:28.932 10:57:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.932 10:57:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.932 10:57:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.192 ************************************ 00:03:29.192 START TEST rpc_daemon_integrity 00:03:29.192 ************************************ 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.192 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:29.192 { 00:03:29.192 "name": "Malloc2", 00:03:29.192 "aliases": [ 00:03:29.192 "721ad9cf-7133-4dcb-9a6f-b09ff55e1e91" 00:03:29.192 ], 00:03:29.192 "product_name": "Malloc disk", 00:03:29.192 "block_size": 512, 00:03:29.192 "num_blocks": 16384, 00:03:29.192 "uuid": "721ad9cf-7133-4dcb-9a6f-b09ff55e1e91", 00:03:29.192 "assigned_rate_limits": { 00:03:29.193 "rw_ios_per_sec": 0, 00:03:29.193 "rw_mbytes_per_sec": 0, 00:03:29.193 "r_mbytes_per_sec": 0, 00:03:29.193 "w_mbytes_per_sec": 0 00:03:29.193 }, 00:03:29.193 "claimed": false, 00:03:29.193 "zoned": false, 00:03:29.193 "supported_io_types": { 00:03:29.193 "read": true, 00:03:29.193 "write": true, 00:03:29.193 "unmap": true, 00:03:29.193 "flush": true, 00:03:29.193 "reset": true, 00:03:29.193 "nvme_admin": false, 00:03:29.193 "nvme_io": false, 00:03:29.193 "nvme_io_md": false, 00:03:29.193 "write_zeroes": true, 00:03:29.193 "zcopy": true, 00:03:29.193 "get_zone_info": false, 00:03:29.193 "zone_management": false, 00:03:29.193 "zone_append": false, 00:03:29.193 "compare": false, 00:03:29.193 "compare_and_write": false, 00:03:29.193 "abort": true, 00:03:29.193 "seek_hole": false, 00:03:29.193 "seek_data": false, 00:03:29.193 "copy": true, 00:03:29.193 "nvme_iov_md": false 00:03:29.193 }, 00:03:29.193 "memory_domains": [ 00:03:29.193 { 
00:03:29.193 "dma_device_id": "system", 00:03:29.193 "dma_device_type": 1 00:03:29.193 }, 00:03:29.193 { 00:03:29.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.193 "dma_device_type": 2 00:03:29.193 } 00:03:29.193 ], 00:03:29.193 "driver_specific": {} 00:03:29.193 } 00:03:29.193 ]' 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.193 [2024-11-20 10:57:56.556485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:29.193 [2024-11-20 10:57:56.556512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:29.193 [2024-11-20 10:57:56.556523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x250fb70 00:03:29.193 [2024-11-20 10:57:56.556529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:29.193 [2024-11-20 10:57:56.557537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:29.193 [2024-11-20 10:57:56.557556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:29.193 Passthru0 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:29.193 { 00:03:29.193 "name": "Malloc2", 00:03:29.193 "aliases": [ 00:03:29.193 "721ad9cf-7133-4dcb-9a6f-b09ff55e1e91" 00:03:29.193 ], 00:03:29.193 "product_name": "Malloc disk", 00:03:29.193 "block_size": 512, 00:03:29.193 "num_blocks": 16384, 00:03:29.193 "uuid": "721ad9cf-7133-4dcb-9a6f-b09ff55e1e91", 00:03:29.193 "assigned_rate_limits": { 00:03:29.193 "rw_ios_per_sec": 0, 00:03:29.193 "rw_mbytes_per_sec": 0, 00:03:29.193 "r_mbytes_per_sec": 0, 00:03:29.193 "w_mbytes_per_sec": 0 00:03:29.193 }, 00:03:29.193 "claimed": true, 00:03:29.193 "claim_type": "exclusive_write", 00:03:29.193 "zoned": false, 00:03:29.193 "supported_io_types": { 00:03:29.193 "read": true, 00:03:29.193 "write": true, 00:03:29.193 "unmap": true, 00:03:29.193 "flush": true, 00:03:29.193 "reset": true, 00:03:29.193 "nvme_admin": false, 00:03:29.193 "nvme_io": false, 00:03:29.193 "nvme_io_md": false, 00:03:29.193 "write_zeroes": true, 00:03:29.193 "zcopy": true, 00:03:29.193 "get_zone_info": false, 00:03:29.193 "zone_management": false, 00:03:29.193 "zone_append": false, 00:03:29.193 "compare": false, 00:03:29.193 "compare_and_write": false, 00:03:29.193 "abort": true, 00:03:29.193 "seek_hole": false, 00:03:29.193 "seek_data": false, 00:03:29.193 "copy": true, 00:03:29.193 "nvme_iov_md": false 00:03:29.193 }, 00:03:29.193 "memory_domains": [ 00:03:29.193 { 00:03:29.193 "dma_device_id": "system", 00:03:29.193 "dma_device_type": 1 00:03:29.193 }, 00:03:29.193 { 00:03:29.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.193 "dma_device_type": 2 00:03:29.193 } 00:03:29.193 ], 00:03:29.193 "driver_specific": {} 00:03:29.193 }, 00:03:29.193 { 00:03:29.193 "name": "Passthru0", 00:03:29.193 "aliases": [ 00:03:29.193 "8f82ea20-2480-5b47-a888-6080e47da0c0" 00:03:29.193 ], 00:03:29.193 "product_name": "passthru", 00:03:29.193 "block_size": 512, 00:03:29.193 "num_blocks": 16384, 00:03:29.193 "uuid": 
"8f82ea20-2480-5b47-a888-6080e47da0c0", 00:03:29.193 "assigned_rate_limits": { 00:03:29.193 "rw_ios_per_sec": 0, 00:03:29.193 "rw_mbytes_per_sec": 0, 00:03:29.193 "r_mbytes_per_sec": 0, 00:03:29.193 "w_mbytes_per_sec": 0 00:03:29.193 }, 00:03:29.193 "claimed": false, 00:03:29.193 "zoned": false, 00:03:29.193 "supported_io_types": { 00:03:29.193 "read": true, 00:03:29.193 "write": true, 00:03:29.193 "unmap": true, 00:03:29.193 "flush": true, 00:03:29.193 "reset": true, 00:03:29.193 "nvme_admin": false, 00:03:29.193 "nvme_io": false, 00:03:29.193 "nvme_io_md": false, 00:03:29.193 "write_zeroes": true, 00:03:29.193 "zcopy": true, 00:03:29.193 "get_zone_info": false, 00:03:29.193 "zone_management": false, 00:03:29.193 "zone_append": false, 00:03:29.193 "compare": false, 00:03:29.193 "compare_and_write": false, 00:03:29.193 "abort": true, 00:03:29.193 "seek_hole": false, 00:03:29.193 "seek_data": false, 00:03:29.193 "copy": true, 00:03:29.193 "nvme_iov_md": false 00:03:29.193 }, 00:03:29.193 "memory_domains": [ 00:03:29.193 { 00:03:29.193 "dma_device_id": "system", 00:03:29.193 "dma_device_type": 1 00:03:29.193 }, 00:03:29.193 { 00:03:29.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.193 "dma_device_type": 2 00:03:29.193 } 00:03:29.193 ], 00:03:29.193 "driver_specific": { 00:03:29.193 "passthru": { 00:03:29.193 "name": "Passthru0", 00:03:29.193 "base_bdev_name": "Malloc2" 00:03:29.193 } 00:03:29.193 } 00:03:29.193 } 00:03:29.193 ]' 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:29.193 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:29.453 10:57:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:29.453 00:03:29.453 real 0m0.273s 00:03:29.453 user 0m0.181s 00:03:29.453 sys 0m0.030s 00:03:29.453 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.453 10:57:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.453 ************************************ 00:03:29.453 END TEST rpc_daemon_integrity 00:03:29.453 ************************************ 00:03:29.453 10:57:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:29.453 10:57:56 rpc -- rpc/rpc.sh@84 -- # killprocess 3860093 00:03:29.453 10:57:56 rpc -- common/autotest_common.sh@954 -- # '[' -z 3860093 ']' 00:03:29.453 10:57:56 rpc -- common/autotest_common.sh@958 -- # kill -0 3860093 00:03:29.453 10:57:56 rpc -- common/autotest_common.sh@959 -- # uname 00:03:29.453 10:57:56 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:29.453 10:57:56 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3860093 00:03:29.453 10:57:56 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:29.453 10:57:56 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:29.453 10:57:56 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3860093' 00:03:29.453 killing process with pid 3860093 00:03:29.453 10:57:56 rpc -- common/autotest_common.sh@973 -- # kill 3860093 00:03:29.453 10:57:56 rpc -- common/autotest_common.sh@978 -- # wait 3860093 00:03:29.713 00:03:29.713 real 0m2.094s 00:03:29.713 user 0m2.649s 00:03:29.713 sys 0m0.712s 00:03:29.713 10:57:57 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.713 10:57:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.713 ************************************ 00:03:29.713 END TEST rpc 00:03:29.713 ************************************ 00:03:29.713 10:57:57 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:29.713 10:57:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.713 10:57:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.713 10:57:57 -- common/autotest_common.sh@10 -- # set +x 00:03:29.713 ************************************ 00:03:29.713 START TEST skip_rpc 00:03:29.713 ************************************ 00:03:29.713 10:57:57 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:29.972 * Looking for test storage... 
00:03:29.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:29.972 10:57:57 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:29.972 10:57:57 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:29.972 10:57:57 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:29.972 10:57:57 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.972 10:57:57 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:29.972 10:57:57 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.972 10:57:57 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.972 --rc genhtml_branch_coverage=1 00:03:29.972 --rc genhtml_function_coverage=1 00:03:29.972 --rc genhtml_legend=1 00:03:29.972 --rc geninfo_all_blocks=1 00:03:29.972 --rc geninfo_unexecuted_blocks=1 00:03:29.972 00:03:29.972 ' 00:03:29.972 10:57:57 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.972 --rc genhtml_branch_coverage=1 00:03:29.972 --rc genhtml_function_coverage=1 00:03:29.972 --rc genhtml_legend=1 00:03:29.972 --rc geninfo_all_blocks=1 00:03:29.972 --rc geninfo_unexecuted_blocks=1 00:03:29.972 00:03:29.972 ' 00:03:29.972 10:57:57 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.972 --rc genhtml_branch_coverage=1 00:03:29.972 --rc genhtml_function_coverage=1 00:03:29.972 --rc genhtml_legend=1 00:03:29.972 --rc geninfo_all_blocks=1 00:03:29.972 --rc geninfo_unexecuted_blocks=1 00:03:29.972 00:03:29.972 ' 00:03:29.972 10:57:57 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.972 --rc genhtml_branch_coverage=1 00:03:29.972 --rc genhtml_function_coverage=1 00:03:29.972 --rc genhtml_legend=1 00:03:29.972 --rc geninfo_all_blocks=1 00:03:29.973 --rc geninfo_unexecuted_blocks=1 00:03:29.973 00:03:29.973 ' 00:03:29.973 10:57:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:29.973 10:57:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:29.973 10:57:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:29.973 10:57:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.973 10:57:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.973 10:57:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.973 ************************************ 00:03:29.973 START TEST skip_rpc 00:03:29.973 ************************************ 00:03:29.973 10:57:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:29.973 10:57:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3860728 00:03:29.973 10:57:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:29.973 10:57:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:29.973 10:57:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:29.973 [2024-11-20 10:57:57.423505] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:03:29.973 [2024-11-20 10:57:57.423541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3860728 ] 00:03:30.231 [2024-11-20 10:57:57.498004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.231 [2024-11-20 10:57:57.538239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.537 10:58:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:35.537 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:35.537 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:35.537 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:35.537 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.537 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:35.537 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.537 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:35.538 10:58:02 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3860728 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3860728 ']' 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3860728 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3860728 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3860728' 00:03:35.538 killing process with pid 3860728 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3860728 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3860728 00:03:35.538 00:03:35.538 real 0m5.364s 00:03:35.538 user 0m5.125s 00:03:35.538 sys 0m0.277s 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.538 10:58:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.538 ************************************ 00:03:35.538 END TEST skip_rpc 00:03:35.538 ************************************ 00:03:35.538 10:58:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:35.538 10:58:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.538 10:58:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.538 10:58:02 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.538 ************************************ 00:03:35.538 START TEST skip_rpc_with_json 00:03:35.538 ************************************ 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3861676 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3861676 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3861676 ']' 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:35.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:35.538 10:58:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.538 [2024-11-20 10:58:02.857442] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:03:35.538 [2024-11-20 10:58:02.857488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3861676 ] 00:03:35.538 [2024-11-20 10:58:02.934319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.538 [2024-11-20 10:58:02.976767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.798 [2024-11-20 10:58:03.194940] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:35.798 request: 00:03:35.798 { 00:03:35.798 "trtype": "tcp", 00:03:35.798 "method": "nvmf_get_transports", 00:03:35.798 "req_id": 1 00:03:35.798 } 00:03:35.798 Got JSON-RPC error response 00:03:35.798 response: 00:03:35.798 { 00:03:35.798 "code": -19, 00:03:35.798 "message": "No such device" 00:03:35.798 } 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.798 [2024-11-20 10:58:03.207050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:35.798 10:58:03 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.798 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.058 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.058 10:58:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.058 { 00:03:36.058 "subsystems": [ 00:03:36.058 { 00:03:36.058 "subsystem": "fsdev", 00:03:36.058 "config": [ 00:03:36.058 { 00:03:36.058 "method": "fsdev_set_opts", 00:03:36.058 "params": { 00:03:36.058 "fsdev_io_pool_size": 65535, 00:03:36.058 "fsdev_io_cache_size": 256 00:03:36.058 } 00:03:36.058 } 00:03:36.058 ] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "vfio_user_target", 00:03:36.058 "config": null 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "keyring", 00:03:36.058 "config": [] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "iobuf", 00:03:36.058 "config": [ 00:03:36.058 { 00:03:36.058 "method": "iobuf_set_options", 00:03:36.058 "params": { 00:03:36.058 "small_pool_count": 8192, 00:03:36.058 "large_pool_count": 1024, 00:03:36.058 "small_bufsize": 8192, 00:03:36.058 "large_bufsize": 135168, 00:03:36.058 "enable_numa": false 00:03:36.058 } 00:03:36.058 } 00:03:36.058 ] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "sock", 00:03:36.058 "config": [ 00:03:36.058 { 00:03:36.058 "method": "sock_set_default_impl", 00:03:36.058 "params": { 00:03:36.058 "impl_name": "posix" 00:03:36.058 } 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "method": "sock_impl_set_options", 00:03:36.058 "params": { 00:03:36.058 "impl_name": "ssl", 00:03:36.058 "recv_buf_size": 4096, 00:03:36.058 "send_buf_size": 4096, 
00:03:36.058 "enable_recv_pipe": true, 00:03:36.058 "enable_quickack": false, 00:03:36.058 "enable_placement_id": 0, 00:03:36.058 "enable_zerocopy_send_server": true, 00:03:36.058 "enable_zerocopy_send_client": false, 00:03:36.058 "zerocopy_threshold": 0, 00:03:36.058 "tls_version": 0, 00:03:36.058 "enable_ktls": false 00:03:36.058 } 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "method": "sock_impl_set_options", 00:03:36.058 "params": { 00:03:36.058 "impl_name": "posix", 00:03:36.058 "recv_buf_size": 2097152, 00:03:36.058 "send_buf_size": 2097152, 00:03:36.058 "enable_recv_pipe": true, 00:03:36.058 "enable_quickack": false, 00:03:36.058 "enable_placement_id": 0, 00:03:36.058 "enable_zerocopy_send_server": true, 00:03:36.058 "enable_zerocopy_send_client": false, 00:03:36.058 "zerocopy_threshold": 0, 00:03:36.058 "tls_version": 0, 00:03:36.058 "enable_ktls": false 00:03:36.058 } 00:03:36.058 } 00:03:36.058 ] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "vmd", 00:03:36.058 "config": [] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "accel", 00:03:36.058 "config": [ 00:03:36.058 { 00:03:36.058 "method": "accel_set_options", 00:03:36.058 "params": { 00:03:36.058 "small_cache_size": 128, 00:03:36.058 "large_cache_size": 16, 00:03:36.058 "task_count": 2048, 00:03:36.058 "sequence_count": 2048, 00:03:36.058 "buf_count": 2048 00:03:36.058 } 00:03:36.058 } 00:03:36.058 ] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "bdev", 00:03:36.058 "config": [ 00:03:36.058 { 00:03:36.058 "method": "bdev_set_options", 00:03:36.058 "params": { 00:03:36.058 "bdev_io_pool_size": 65535, 00:03:36.058 "bdev_io_cache_size": 256, 00:03:36.058 "bdev_auto_examine": true, 00:03:36.058 "iobuf_small_cache_size": 128, 00:03:36.058 "iobuf_large_cache_size": 16 00:03:36.058 } 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "method": "bdev_raid_set_options", 00:03:36.058 "params": { 00:03:36.058 "process_window_size_kb": 1024, 00:03:36.058 "process_max_bandwidth_mb_sec": 0 
00:03:36.058 } 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "method": "bdev_iscsi_set_options", 00:03:36.058 "params": { 00:03:36.058 "timeout_sec": 30 00:03:36.058 } 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "method": "bdev_nvme_set_options", 00:03:36.058 "params": { 00:03:36.058 "action_on_timeout": "none", 00:03:36.058 "timeout_us": 0, 00:03:36.058 "timeout_admin_us": 0, 00:03:36.058 "keep_alive_timeout_ms": 10000, 00:03:36.058 "arbitration_burst": 0, 00:03:36.058 "low_priority_weight": 0, 00:03:36.058 "medium_priority_weight": 0, 00:03:36.058 "high_priority_weight": 0, 00:03:36.058 "nvme_adminq_poll_period_us": 10000, 00:03:36.058 "nvme_ioq_poll_period_us": 0, 00:03:36.058 "io_queue_requests": 0, 00:03:36.058 "delay_cmd_submit": true, 00:03:36.058 "transport_retry_count": 4, 00:03:36.058 "bdev_retry_count": 3, 00:03:36.058 "transport_ack_timeout": 0, 00:03:36.058 "ctrlr_loss_timeout_sec": 0, 00:03:36.058 "reconnect_delay_sec": 0, 00:03:36.058 "fast_io_fail_timeout_sec": 0, 00:03:36.058 "disable_auto_failback": false, 00:03:36.058 "generate_uuids": false, 00:03:36.058 "transport_tos": 0, 00:03:36.058 "nvme_error_stat": false, 00:03:36.058 "rdma_srq_size": 0, 00:03:36.058 "io_path_stat": false, 00:03:36.058 "allow_accel_sequence": false, 00:03:36.058 "rdma_max_cq_size": 0, 00:03:36.058 "rdma_cm_event_timeout_ms": 0, 00:03:36.058 "dhchap_digests": [ 00:03:36.058 "sha256", 00:03:36.058 "sha384", 00:03:36.058 "sha512" 00:03:36.058 ], 00:03:36.058 "dhchap_dhgroups": [ 00:03:36.058 "null", 00:03:36.058 "ffdhe2048", 00:03:36.058 "ffdhe3072", 00:03:36.058 "ffdhe4096", 00:03:36.058 "ffdhe6144", 00:03:36.058 "ffdhe8192" 00:03:36.058 ] 00:03:36.058 } 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "method": "bdev_nvme_set_hotplug", 00:03:36.058 "params": { 00:03:36.058 "period_us": 100000, 00:03:36.058 "enable": false 00:03:36.058 } 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "method": "bdev_wait_for_examine" 00:03:36.058 } 00:03:36.058 ] 00:03:36.058 }, 00:03:36.058 { 
00:03:36.058 "subsystem": "scsi", 00:03:36.058 "config": null 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "scheduler", 00:03:36.058 "config": [ 00:03:36.058 { 00:03:36.058 "method": "framework_set_scheduler", 00:03:36.058 "params": { 00:03:36.058 "name": "static" 00:03:36.058 } 00:03:36.058 } 00:03:36.058 ] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "vhost_scsi", 00:03:36.058 "config": [] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "vhost_blk", 00:03:36.058 "config": [] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "ublk", 00:03:36.058 "config": [] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "nbd", 00:03:36.058 "config": [] 00:03:36.058 }, 00:03:36.058 { 00:03:36.058 "subsystem": "nvmf", 00:03:36.058 "config": [ 00:03:36.058 { 00:03:36.058 "method": "nvmf_set_config", 00:03:36.058 "params": { 00:03:36.058 "discovery_filter": "match_any", 00:03:36.058 "admin_cmd_passthru": { 00:03:36.058 "identify_ctrlr": false 00:03:36.058 }, 00:03:36.058 "dhchap_digests": [ 00:03:36.058 "sha256", 00:03:36.058 "sha384", 00:03:36.058 "sha512" 00:03:36.058 ], 00:03:36.058 "dhchap_dhgroups": [ 00:03:36.058 "null", 00:03:36.058 "ffdhe2048", 00:03:36.058 "ffdhe3072", 00:03:36.058 "ffdhe4096", 00:03:36.059 "ffdhe6144", 00:03:36.059 "ffdhe8192" 00:03:36.059 ] 00:03:36.059 } 00:03:36.059 }, 00:03:36.059 { 00:03:36.059 "method": "nvmf_set_max_subsystems", 00:03:36.059 "params": { 00:03:36.059 "max_subsystems": 1024 00:03:36.059 } 00:03:36.059 }, 00:03:36.059 { 00:03:36.059 "method": "nvmf_set_crdt", 00:03:36.059 "params": { 00:03:36.059 "crdt1": 0, 00:03:36.059 "crdt2": 0, 00:03:36.059 "crdt3": 0 00:03:36.059 } 00:03:36.059 }, 00:03:36.059 { 00:03:36.059 "method": "nvmf_create_transport", 00:03:36.059 "params": { 00:03:36.059 "trtype": "TCP", 00:03:36.059 "max_queue_depth": 128, 00:03:36.059 "max_io_qpairs_per_ctrlr": 127, 00:03:36.059 "in_capsule_data_size": 4096, 00:03:36.059 "max_io_size": 131072, 00:03:36.059 
"io_unit_size": 131072, 00:03:36.059 "max_aq_depth": 128, 00:03:36.059 "num_shared_buffers": 511, 00:03:36.059 "buf_cache_size": 4294967295, 00:03:36.059 "dif_insert_or_strip": false, 00:03:36.059 "zcopy": false, 00:03:36.059 "c2h_success": true, 00:03:36.059 "sock_priority": 0, 00:03:36.059 "abort_timeout_sec": 1, 00:03:36.059 "ack_timeout": 0, 00:03:36.059 "data_wr_pool_size": 0 00:03:36.059 } 00:03:36.059 } 00:03:36.059 ] 00:03:36.059 }, 00:03:36.059 { 00:03:36.059 "subsystem": "iscsi", 00:03:36.059 "config": [ 00:03:36.059 { 00:03:36.059 "method": "iscsi_set_options", 00:03:36.059 "params": { 00:03:36.059 "node_base": "iqn.2016-06.io.spdk", 00:03:36.059 "max_sessions": 128, 00:03:36.059 "max_connections_per_session": 2, 00:03:36.059 "max_queue_depth": 64, 00:03:36.059 "default_time2wait": 2, 00:03:36.059 "default_time2retain": 20, 00:03:36.059 "first_burst_length": 8192, 00:03:36.059 "immediate_data": true, 00:03:36.059 "allow_duplicated_isid": false, 00:03:36.059 "error_recovery_level": 0, 00:03:36.059 "nop_timeout": 60, 00:03:36.059 "nop_in_interval": 30, 00:03:36.059 "disable_chap": false, 00:03:36.059 "require_chap": false, 00:03:36.059 "mutual_chap": false, 00:03:36.059 "chap_group": 0, 00:03:36.059 "max_large_datain_per_connection": 64, 00:03:36.059 "max_r2t_per_connection": 4, 00:03:36.059 "pdu_pool_size": 36864, 00:03:36.059 "immediate_data_pool_size": 16384, 00:03:36.059 "data_out_pool_size": 2048 00:03:36.059 } 00:03:36.059 } 00:03:36.059 ] 00:03:36.059 } 00:03:36.059 ] 00:03:36.059 } 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3861676 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3861676 ']' 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3861676 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3861676 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3861676' 00:03:36.059 killing process with pid 3861676 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3861676 00:03:36.059 10:58:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3861676 00:03:36.318 10:58:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3861694 00:03:36.318 10:58:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.318 10:58:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3861694 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3861694 ']' 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3861694 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3861694 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3861694' 00:03:41.588 killing process with pid 3861694 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3861694 00:03:41.588 10:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3861694 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:41.864 00:03:41.864 real 0m6.289s 00:03:41.864 user 0m5.968s 00:03:41.864 sys 0m0.616s 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.864 ************************************ 00:03:41.864 END TEST skip_rpc_with_json 00:03:41.864 ************************************ 00:03:41.864 10:58:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:41.864 10:58:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.864 10:58:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.864 10:58:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.864 ************************************ 00:03:41.864 START TEST skip_rpc_with_delay 00:03:41.864 ************************************ 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.864 [2024-11-20 10:58:09.219596] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:41.864 00:03:41.864 real 0m0.071s 00:03:41.864 user 0m0.048s 00:03:41.864 sys 0m0.023s 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.864 10:58:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:41.864 ************************************ 00:03:41.864 END TEST skip_rpc_with_delay 00:03:41.864 ************************************ 00:03:41.864 10:58:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:41.864 10:58:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:41.864 10:58:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:41.864 10:58:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.864 10:58:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.864 10:58:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.864 ************************************ 00:03:41.864 START TEST exit_on_failed_rpc_init 00:03:41.864 ************************************ 00:03:41.864 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:41.864 10:58:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3862687 00:03:41.864 10:58:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3862687 00:03:41.864 10:58:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:41.864 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3862687 ']' 00:03:41.864 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.864 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:41.864 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:41.864 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:41.864 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:42.123 [2024-11-20 10:58:09.359098] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:03:42.123 [2024-11-20 10:58:09.359143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862687 ] 00:03:42.123 [2024-11-20 10:58:09.433048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.123 [2024-11-20 10:58:09.476199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:42.382 
10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:42.382 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:42.382 [2024-11-20 10:58:09.747363] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:03:42.382 [2024-11-20 10:58:09.747415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862893 ] 00:03:42.382 [2024-11-20 10:58:09.823398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.382 [2024-11-20 10:58:09.864490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:42.382 [2024-11-20 10:58:09.864545] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:42.382 [2024-11-20 10:58:09.864554] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:42.383 [2024-11-20 10:58:09.864563] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3862687 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3862687 ']' 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3862687 00:03:42.641 10:58:09 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3862687 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3862687' 00:03:42.641 killing process with pid 3862687 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3862687 00:03:42.641 10:58:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3862687 00:03:42.901 00:03:42.901 real 0m0.961s 00:03:42.901 user 0m1.018s 00:03:42.901 sys 0m0.398s 00:03:42.901 10:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.901 10:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:42.901 ************************************ 00:03:42.901 END TEST exit_on_failed_rpc_init 00:03:42.901 ************************************ 00:03:42.901 10:58:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.901 00:03:42.901 real 0m13.145s 00:03:42.901 user 0m12.376s 00:03:42.901 sys 0m1.586s 00:03:42.901 10:58:10 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.901 10:58:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.901 ************************************ 00:03:42.901 END TEST skip_rpc 00:03:42.901 ************************************ 00:03:42.901 10:58:10 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:42.901 10:58:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.901 10:58:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.901 10:58:10 -- common/autotest_common.sh@10 -- # set +x 00:03:42.901 ************************************ 00:03:42.901 START TEST rpc_client 00:03:42.901 ************************************ 00:03:42.901 10:58:10 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:43.160 * Looking for test storage... 00:03:43.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:43.160 10:58:10 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:43.160 10:58:10 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:43.160 10:58:10 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:43.160 10:58:10 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:43.160 10:58:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.160 10:58:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.160 10:58:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.161 10:58:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:43.161 10:58:10 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.161 10:58:10 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:43.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.161 --rc genhtml_branch_coverage=1 00:03:43.161 --rc genhtml_function_coverage=1 00:03:43.161 --rc genhtml_legend=1 00:03:43.161 --rc geninfo_all_blocks=1 00:03:43.161 --rc geninfo_unexecuted_blocks=1 00:03:43.161 00:03:43.161 ' 00:03:43.161 10:58:10 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:43.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.161 --rc genhtml_branch_coverage=1 
00:03:43.161 --rc genhtml_function_coverage=1 00:03:43.161 --rc genhtml_legend=1 00:03:43.161 --rc geninfo_all_blocks=1 00:03:43.161 --rc geninfo_unexecuted_blocks=1 00:03:43.161 00:03:43.161 ' 00:03:43.161 10:58:10 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:43.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.161 --rc genhtml_branch_coverage=1 00:03:43.161 --rc genhtml_function_coverage=1 00:03:43.161 --rc genhtml_legend=1 00:03:43.161 --rc geninfo_all_blocks=1 00:03:43.161 --rc geninfo_unexecuted_blocks=1 00:03:43.161 00:03:43.161 ' 00:03:43.161 10:58:10 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:43.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.161 --rc genhtml_branch_coverage=1 00:03:43.161 --rc genhtml_function_coverage=1 00:03:43.161 --rc genhtml_legend=1 00:03:43.161 --rc geninfo_all_blocks=1 00:03:43.161 --rc geninfo_unexecuted_blocks=1 00:03:43.161 00:03:43.161 ' 00:03:43.161 10:58:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:43.161 OK 00:03:43.161 10:58:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:43.161 00:03:43.161 real 0m0.200s 00:03:43.161 user 0m0.122s 00:03:43.161 sys 0m0.091s 00:03:43.161 10:58:10 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.161 10:58:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:43.161 ************************************ 00:03:43.161 END TEST rpc_client 00:03:43.161 ************************************ 00:03:43.161 10:58:10 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:43.161 10:58:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.161 10:58:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.161 10:58:10 -- common/autotest_common.sh@10 
-- # set +x 00:03:43.161 ************************************ 00:03:43.161 START TEST json_config 00:03:43.161 ************************************ 00:03:43.161 10:58:10 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:43.420 10:58:10 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:43.420 10:58:10 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:43.420 10:58:10 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:43.420 10:58:10 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:43.420 10:58:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.420 10:58:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.420 10:58:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.420 10:58:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.420 10:58:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.420 10:58:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.420 10:58:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.420 10:58:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.420 10:58:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.421 10:58:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.421 10:58:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.421 10:58:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:43.421 10:58:10 json_config -- scripts/common.sh@345 -- # : 1 00:03:43.421 10:58:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.421 10:58:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:43.421 10:58:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:43.421 10:58:10 json_config -- scripts/common.sh@353 -- # local d=1 00:03:43.421 10:58:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.421 10:58:10 json_config -- scripts/common.sh@355 -- # echo 1 00:03:43.421 10:58:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.421 10:58:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:43.421 10:58:10 json_config -- scripts/common.sh@353 -- # local d=2 00:03:43.421 10:58:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.421 10:58:10 json_config -- scripts/common.sh@355 -- # echo 2 00:03:43.421 10:58:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.421 10:58:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.421 10:58:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.421 10:58:10 json_config -- scripts/common.sh@368 -- # return 0 00:03:43.421 10:58:10 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.421 10:58:10 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:43.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.421 --rc genhtml_branch_coverage=1 00:03:43.421 --rc genhtml_function_coverage=1 00:03:43.421 --rc genhtml_legend=1 00:03:43.421 --rc geninfo_all_blocks=1 00:03:43.421 --rc geninfo_unexecuted_blocks=1 00:03:43.421 00:03:43.421 ' 00:03:43.421 10:58:10 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:43.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.421 --rc genhtml_branch_coverage=1 00:03:43.421 --rc genhtml_function_coverage=1 00:03:43.421 --rc genhtml_legend=1 00:03:43.421 --rc geninfo_all_blocks=1 00:03:43.421 --rc geninfo_unexecuted_blocks=1 00:03:43.421 00:03:43.421 ' 00:03:43.421 10:58:10 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:43.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.421 --rc genhtml_branch_coverage=1 00:03:43.421 --rc genhtml_function_coverage=1 00:03:43.421 --rc genhtml_legend=1 00:03:43.421 --rc geninfo_all_blocks=1 00:03:43.421 --rc geninfo_unexecuted_blocks=1 00:03:43.421 00:03:43.421 ' 00:03:43.421 10:58:10 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:43.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.421 --rc genhtml_branch_coverage=1 00:03:43.421 --rc genhtml_function_coverage=1 00:03:43.421 --rc genhtml_legend=1 00:03:43.421 --rc geninfo_all_blocks=1 00:03:43.421 --rc geninfo_unexecuted_blocks=1 00:03:43.421 00:03:43.421 ' 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:43.421 10:58:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:43.421 10:58:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.421 10:58:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.421 10:58:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.421 10:58:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.421 10:58:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.421 10:58:10 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.421 10:58:10 json_config -- paths/export.sh@5 -- # export PATH 00:03:43.421 10:58:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@51 -- # : 0 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:43.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:43.421 10:58:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:43.421 INFO: JSON configuration test init 00:03:43.421 10:58:10 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:43.421 10:58:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.421 10:58:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:43.421 10:58:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.421 10:58:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.421 10:58:10 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:43.422 10:58:10 json_config -- json_config/common.sh@9 -- # local app=target 00:03:43.422 10:58:10 json_config -- json_config/common.sh@10 -- # shift 00:03:43.422 10:58:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:43.422 10:58:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:43.422 10:58:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:43.422 10:58:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:43.422 10:58:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:43.422 10:58:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3863147 00:03:43.422 10:58:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:43.422 Waiting for target to run... 
00:03:43.422 10:58:10 json_config -- json_config/common.sh@25 -- # waitforlisten 3863147 /var/tmp/spdk_tgt.sock 00:03:43.422 10:58:10 json_config -- common/autotest_common.sh@835 -- # '[' -z 3863147 ']' 00:03:43.422 10:58:10 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:43.422 10:58:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:43.422 10:58:10 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:43.422 10:58:10 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:43.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:43.422 10:58:10 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:43.422 10:58:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.422 [2024-11-20 10:58:10.892698] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:03:43.422 [2024-11-20 10:58:10.892747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3863147 ] 00:03:43.989 [2024-11-20 10:58:11.180996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.989 [2024-11-20 10:58:11.215797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.249 10:58:11 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:44.249 10:58:11 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:44.249 10:58:11 json_config -- json_config/common.sh@26 -- # echo '' 00:03:44.249 00:03:44.249 10:58:11 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:44.249 10:58:11 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:44.249 10:58:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.249 10:58:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.249 10:58:11 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:44.249 10:58:11 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:44.249 10:58:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:44.249 10:58:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.508 10:58:11 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:44.508 10:58:11 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:44.508 10:58:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:47.792 10:58:14 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:47.792 10:58:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:47.792 10:58:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.792 10:58:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.792 10:58:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:47.792 10:58:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:47.792 10:58:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:47.792 10:58:14 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:47.792 10:58:14 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:47.792 10:58:14 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:47.792 10:58:14 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:47.792 10:58:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@54 -- # sort 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:47.792 10:58:15 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:47.792 10:58:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.792 10:58:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:47.792 10:58:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.792 10:58:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:47.792 10:58:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:47.792 10:58:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:48.050 MallocForNvmf0 00:03:48.050 10:58:15 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:48.050 10:58:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:48.050 MallocForNvmf1 00:03:48.050 10:58:15 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:48.050 10:58:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:48.309 [2024-11-20 10:58:15.692944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:48.309 10:58:15 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:48.309 10:58:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:48.569 10:58:15 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:48.569 10:58:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:48.828 10:58:16 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:48.828 10:58:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:48.828 10:58:16 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:48.828 10:58:16 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:49.088 [2024-11-20 10:58:16.483447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:49.088 10:58:16 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:49.088 10:58:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.088 10:58:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.088 10:58:16 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:49.088 10:58:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.088 10:58:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.088 10:58:16 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:49.088 10:58:16 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:49.088 10:58:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:49.346 MallocBdevForConfigChangeCheck 00:03:49.346 10:58:16 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:49.346 10:58:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.346 10:58:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.346 10:58:16 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:49.346 10:58:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:49.913 10:58:17 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:49.913 INFO: shutting down applications... 00:03:49.913 10:58:17 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:49.913 10:58:17 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:49.913 10:58:17 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:49.913 10:58:17 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:51.290 Calling clear_iscsi_subsystem 00:03:51.290 Calling clear_nvmf_subsystem 00:03:51.290 Calling clear_nbd_subsystem 00:03:51.290 Calling clear_ublk_subsystem 00:03:51.290 Calling clear_vhost_blk_subsystem 00:03:51.290 Calling clear_vhost_scsi_subsystem 00:03:51.290 Calling clear_bdev_subsystem 00:03:51.290 10:58:18 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:51.290 10:58:18 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:51.290 10:58:18 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:51.290 10:58:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:51.290 10:58:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:51.290 10:58:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:51.862 10:58:19 json_config -- json_config/json_config.sh@352 -- # break 00:03:51.862 10:58:19 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:51.862 10:58:19 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:51.862 10:58:19 json_config -- json_config/common.sh@31 -- # local app=target 00:03:51.863 10:58:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:51.863 10:58:19 json_config -- json_config/common.sh@35 -- # [[ -n 3863147 ]] 00:03:51.863 10:58:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3863147 00:03:51.863 10:58:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:51.863 10:58:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:51.863 10:58:19 json_config -- json_config/common.sh@41 -- # kill -0 3863147 00:03:51.863 10:58:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:52.123 10:58:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:52.123 10:58:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:52.123 10:58:19 json_config -- json_config/common.sh@41 -- # kill -0 3863147 00:03:52.123 10:58:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:52.123 10:58:19 json_config -- json_config/common.sh@43 -- # break 00:03:52.123 10:58:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:52.123 10:58:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:52.123 SPDK target shutdown done 00:03:52.123 10:58:19 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:52.123 INFO: relaunching applications... 
00:03:52.123 10:58:19 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.123 10:58:19 json_config -- json_config/common.sh@9 -- # local app=target 00:03:52.123 10:58:19 json_config -- json_config/common.sh@10 -- # shift 00:03:52.123 10:58:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:52.123 10:58:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:52.123 10:58:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:52.123 10:58:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.123 10:58:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.123 10:58:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3864769 00:03:52.123 10:58:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:52.123 Waiting for target to run... 00:03:52.123 10:58:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.123 10:58:19 json_config -- json_config/common.sh@25 -- # waitforlisten 3864769 /var/tmp/spdk_tgt.sock 00:03:52.123 10:58:19 json_config -- common/autotest_common.sh@835 -- # '[' -z 3864769 ']' 00:03:52.123 10:58:19 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:52.123 10:58:19 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:52.123 10:58:19 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:52.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:52.123 10:58:19 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:52.123 10:58:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.383 [2024-11-20 10:58:19.641466] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:03:52.383 [2024-11-20 10:58:19.641521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3864769 ] 00:03:52.642 [2024-11-20 10:58:19.927520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.642 [2024-11-20 10:58:19.962200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.933 [2024-11-20 10:58:22.998681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.933 [2024-11-20 10:58:23.031064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.933 10:58:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.933 10:58:23 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:55.933 10:58:23 json_config -- json_config/common.sh@26 -- # echo '' 00:03:55.933 00:03:55.933 10:58:23 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:55.933 10:58:23 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:55.933 INFO: Checking if target configuration is the same... 
00:03:55.933 10:58:23 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.933 10:58:23 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:55.933 10:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.933 + '[' 2 -ne 2 ']' 00:03:55.933 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:55.933 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:55.933 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.933 +++ basename /dev/fd/62 00:03:55.933 ++ mktemp /tmp/62.XXX 00:03:55.933 + tmp_file_1=/tmp/62.s6c 00:03:55.933 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.933 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.933 + tmp_file_2=/tmp/spdk_tgt_config.json.r6k 00:03:55.933 + ret=0 00:03:55.933 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:55.933 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.193 + diff -u /tmp/62.s6c /tmp/spdk_tgt_config.json.r6k 00:03:56.193 + echo 'INFO: JSON config files are the same' 00:03:56.193 INFO: JSON config files are the same 00:03:56.193 + rm /tmp/62.s6c /tmp/spdk_tgt_config.json.r6k 00:03:56.193 + exit 0 00:03:56.193 10:58:23 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:56.193 10:58:23 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:56.193 INFO: changing configuration and checking if this can be detected... 
00:03:56.193 10:58:23 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.193 10:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.193 10:58:23 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.193 10:58:23 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:56.193 10:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.193 + '[' 2 -ne 2 ']' 00:03:56.193 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:56.193 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:56.193 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.193 +++ basename /dev/fd/62 00:03:56.193 ++ mktemp /tmp/62.XXX 00:03:56.193 + tmp_file_1=/tmp/62.WqI 00:03:56.193 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.193 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.453 + tmp_file_2=/tmp/spdk_tgt_config.json.TRP 00:03:56.453 + ret=0 00:03:56.453 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.740 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.740 + diff -u /tmp/62.WqI /tmp/spdk_tgt_config.json.TRP 00:03:56.740 + ret=1 00:03:56.740 + echo '=== Start of file: /tmp/62.WqI ===' 00:03:56.741 + cat /tmp/62.WqI 00:03:56.741 + echo '=== End of file: /tmp/62.WqI ===' 00:03:56.741 + echo '' 00:03:56.741 + echo '=== Start of file: /tmp/spdk_tgt_config.json.TRP ===' 00:03:56.741 + cat /tmp/spdk_tgt_config.json.TRP 00:03:56.741 + echo '=== End of file: /tmp/spdk_tgt_config.json.TRP ===' 00:03:56.741 + echo '' 00:03:56.741 + rm /tmp/62.WqI /tmp/spdk_tgt_config.json.TRP 00:03:56.741 + exit 1 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:56.741 INFO: configuration change detected. 
00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@324 -- # [[ -n 3864769 ]] 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.741 10:58:24 json_config -- json_config/json_config.sh@330 -- # killprocess 3864769 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@954 -- # '[' -z 3864769 ']' 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@958 -- # kill -0 
3864769 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@959 -- # uname 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3864769 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3864769' 00:03:56.741 killing process with pid 3864769 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@973 -- # kill 3864769 00:03:56.741 10:58:24 json_config -- common/autotest_common.sh@978 -- # wait 3864769 00:03:58.644 10:58:25 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.644 10:58:25 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:58.644 10:58:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:58.644 10:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.644 10:58:25 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:58.644 10:58:25 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:58.644 INFO: Success 00:03:58.644 00:03:58.644 real 0m15.039s 00:03:58.644 user 0m15.756s 00:03:58.644 sys 0m2.363s 00:03:58.644 10:58:25 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.644 10:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.644 ************************************ 00:03:58.644 END TEST json_config 00:03:58.644 ************************************ 00:03:58.644 10:58:25 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:58.644 10:58:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.644 10:58:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.644 10:58:25 -- common/autotest_common.sh@10 -- # set +x 00:03:58.644 ************************************ 00:03:58.644 START TEST json_config_extra_key 00:03:58.644 ************************************ 00:03:58.644 10:58:25 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:58.644 10:58:25 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:58.644 10:58:25 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:58.645 10:58:25 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:58.645 10:58:25 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:58.645 10:58:25 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.645 10:58:25 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:58.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.645 --rc genhtml_branch_coverage=1 00:03:58.645 --rc genhtml_function_coverage=1 00:03:58.645 --rc genhtml_legend=1 00:03:58.645 --rc geninfo_all_blocks=1 
00:03:58.645 --rc geninfo_unexecuted_blocks=1 00:03:58.645 00:03:58.645 ' 00:03:58.645 10:58:25 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:58.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.645 --rc genhtml_branch_coverage=1 00:03:58.645 --rc genhtml_function_coverage=1 00:03:58.645 --rc genhtml_legend=1 00:03:58.645 --rc geninfo_all_blocks=1 00:03:58.645 --rc geninfo_unexecuted_blocks=1 00:03:58.645 00:03:58.645 ' 00:03:58.645 10:58:25 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:58.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.645 --rc genhtml_branch_coverage=1 00:03:58.645 --rc genhtml_function_coverage=1 00:03:58.645 --rc genhtml_legend=1 00:03:58.645 --rc geninfo_all_blocks=1 00:03:58.645 --rc geninfo_unexecuted_blocks=1 00:03:58.645 00:03:58.645 ' 00:03:58.645 10:58:25 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:58.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.645 --rc genhtml_branch_coverage=1 00:03:58.645 --rc genhtml_function_coverage=1 00:03:58.645 --rc genhtml_legend=1 00:03:58.645 --rc geninfo_all_blocks=1 00:03:58.645 --rc geninfo_unexecuted_blocks=1 00:03:58.645 00:03:58.645 ' 00:03:58.645 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:58.645 10:58:25 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:58.645 10:58:25 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.645 10:58:25 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.645 10:58:25 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.645 10:58:25 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:58.645 10:58:25 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:58.645 10:58:25 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:58.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:58.645 10:58:25 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:58.645 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:58.645 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:58.645 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:58.646 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:58.646 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:58.646 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:58.646 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:58.646 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:58.646 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:58.646 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:58.646 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:58.646 INFO: launching applications... 00:03:58.646 10:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3865932 00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:58.646 Waiting for target to run... 
00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3865932 /var/tmp/spdk_tgt.sock 00:03:58.646 10:58:25 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3865932 ']' 00:03:58.646 10:58:25 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:58.646 10:58:25 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.646 10:58:25 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:58.646 10:58:25 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.646 10:58:25 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:58.646 10:58:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:58.646 [2024-11-20 10:58:25.991647] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:03:58.646 [2024-11-20 10:58:25.991700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865932 ] 00:03:59.211 [2024-11-20 10:58:26.442656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.211 [2024-11-20 10:58:26.496519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.469 10:58:26 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:59.469 10:58:26 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:59.469 10:58:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:59.469 00:03:59.469 10:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:59.469 INFO: shutting down applications... 00:03:59.469 10:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:59.469 10:58:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:59.469 10:58:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:59.469 10:58:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3865932 ]] 00:03:59.469 10:58:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3865932 00:03:59.469 10:58:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:59.469 10:58:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.469 10:58:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3865932 00:03:59.469 10:58:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:00.035 10:58:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:00.035 10:58:27 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.035 10:58:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3865932 00:04:00.035 10:58:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:00.035 10:58:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:00.035 10:58:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:00.035 10:58:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:00.035 SPDK target shutdown done 00:04:00.035 10:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:00.035 Success 00:04:00.035 00:04:00.035 real 0m1.580s 00:04:00.035 user 0m1.208s 00:04:00.035 sys 0m0.567s 00:04:00.035 10:58:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.035 10:58:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:00.035 ************************************ 00:04:00.035 END TEST json_config_extra_key 00:04:00.035 ************************************ 00:04:00.035 10:58:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.035 10:58:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.035 10:58:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.035 10:58:27 -- common/autotest_common.sh@10 -- # set +x 00:04:00.035 ************************************ 00:04:00.035 START TEST alias_rpc 00:04:00.035 ************************************ 00:04:00.035 10:58:27 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.035 * Looking for test storage... 
00:04:00.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:00.035 10:58:27 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.035 10:58:27 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.035 10:58:27 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.293 10:58:27 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.293 10:58:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.294 10:58:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.294 --rc genhtml_branch_coverage=1 00:04:00.294 --rc genhtml_function_coverage=1 00:04:00.294 --rc genhtml_legend=1 00:04:00.294 --rc geninfo_all_blocks=1 00:04:00.294 --rc geninfo_unexecuted_blocks=1 00:04:00.294 00:04:00.294 ' 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.294 --rc genhtml_branch_coverage=1 00:04:00.294 --rc genhtml_function_coverage=1 00:04:00.294 --rc genhtml_legend=1 00:04:00.294 --rc geninfo_all_blocks=1 00:04:00.294 --rc geninfo_unexecuted_blocks=1 00:04:00.294 00:04:00.294 ' 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:00.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.294 --rc genhtml_branch_coverage=1 00:04:00.294 --rc genhtml_function_coverage=1 00:04:00.294 --rc genhtml_legend=1 00:04:00.294 --rc geninfo_all_blocks=1 00:04:00.294 --rc geninfo_unexecuted_blocks=1 00:04:00.294 00:04:00.294 ' 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.294 --rc genhtml_branch_coverage=1 00:04:00.294 --rc genhtml_function_coverage=1 00:04:00.294 --rc genhtml_legend=1 00:04:00.294 --rc geninfo_all_blocks=1 00:04:00.294 --rc geninfo_unexecuted_blocks=1 00:04:00.294 00:04:00.294 ' 00:04:00.294 10:58:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:00.294 10:58:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3866322 00:04:00.294 10:58:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3866322 00:04:00.294 10:58:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3866322 ']' 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.294 10:58:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.294 [2024-11-20 10:58:27.639615] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:04:00.294 [2024-11-20 10:58:27.639665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866322 ] 00:04:00.294 [2024-11-20 10:58:27.711938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.294 [2024-11-20 10:58:27.752416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.552 10:58:27 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.552 10:58:27 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:00.552 10:58:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:00.810 10:58:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3866322 00:04:00.810 10:58:28 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3866322 ']' 00:04:00.810 10:58:28 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3866322 00:04:00.810 10:58:28 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:00.810 10:58:28 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.810 10:58:28 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3866322 00:04:00.810 10:58:28 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.810 10:58:28 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.810 10:58:28 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3866322' 00:04:00.810 killing process with pid 3866322 00:04:00.810 10:58:28 alias_rpc -- common/autotest_common.sh@973 -- # kill 3866322 00:04:00.810 10:58:28 alias_rpc -- common/autotest_common.sh@978 -- # wait 3866322 00:04:01.068 00:04:01.068 real 0m1.136s 00:04:01.068 user 0m1.181s 00:04:01.068 sys 0m0.401s 00:04:01.068 10:58:28 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.068 10:58:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.068 ************************************ 00:04:01.068 END TEST alias_rpc 00:04:01.068 ************************************ 00:04:01.327 10:58:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:01.327 10:58:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.327 10:58:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.327 10:58:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.327 10:58:28 -- common/autotest_common.sh@10 -- # set +x 00:04:01.327 ************************************ 00:04:01.327 START TEST spdkcli_tcp 00:04:01.327 ************************************ 00:04:01.327 10:58:28 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.327 * Looking for test storage... 
00:04:01.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:01.327 10:58:28 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:01.327 10:58:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:01.327 10:58:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:01.327 10:58:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.327 10:58:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:01.327 10:58:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.327 10:58:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:01.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.327 --rc genhtml_branch_coverage=1 00:04:01.327 --rc genhtml_function_coverage=1 00:04:01.327 --rc genhtml_legend=1 00:04:01.327 --rc geninfo_all_blocks=1 00:04:01.327 --rc geninfo_unexecuted_blocks=1 00:04:01.327 00:04:01.327 ' 00:04:01.327 10:58:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:01.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.327 --rc genhtml_branch_coverage=1 00:04:01.327 --rc genhtml_function_coverage=1 00:04:01.327 --rc genhtml_legend=1 00:04:01.328 --rc geninfo_all_blocks=1 00:04:01.328 --rc geninfo_unexecuted_blocks=1 00:04:01.328 00:04:01.328 ' 00:04:01.328 10:58:28 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:01.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.328 --rc genhtml_branch_coverage=1 00:04:01.328 --rc genhtml_function_coverage=1 00:04:01.328 --rc genhtml_legend=1 00:04:01.328 --rc geninfo_all_blocks=1 00:04:01.328 --rc geninfo_unexecuted_blocks=1 00:04:01.328 00:04:01.328 ' 00:04:01.328 10:58:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:01.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.328 --rc genhtml_branch_coverage=1 00:04:01.328 --rc genhtml_function_coverage=1 00:04:01.328 --rc genhtml_legend=1 00:04:01.328 --rc geninfo_all_blocks=1 00:04:01.328 --rc geninfo_unexecuted_blocks=1 00:04:01.328 00:04:01.328 ' 00:04:01.328 10:58:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:01.328 10:58:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:01.328 10:58:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:01.328 10:58:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:01.328 10:58:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:01.328 10:58:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:01.328 10:58:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:01.328 10:58:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:01.328 10:58:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.328 10:58:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3866587 00:04:01.328 10:58:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:01.328 10:58:28 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 3866587 00:04:01.328 10:58:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3866587 ']' 00:04:01.328 10:58:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.328 10:58:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.328 10:58:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.328 10:58:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.328 10:58:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.586 [2024-11-20 10:58:28.853191] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:01.586 [2024-11-20 10:58:28.853241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866587 ] 00:04:01.586 [2024-11-20 10:58:28.929332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:01.586 [2024-11-20 10:58:28.973087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.586 [2024-11-20 10:58:28.973090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.844 10:58:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.844 10:58:29 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:01.844 10:58:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3866620 00:04:01.844 10:58:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:01.844 10:58:29 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:02.103 [ 00:04:02.103 "bdev_malloc_delete", 00:04:02.103 "bdev_malloc_create", 00:04:02.103 "bdev_null_resize", 00:04:02.103 "bdev_null_delete", 00:04:02.103 "bdev_null_create", 00:04:02.103 "bdev_nvme_cuse_unregister", 00:04:02.103 "bdev_nvme_cuse_register", 00:04:02.103 "bdev_opal_new_user", 00:04:02.103 "bdev_opal_set_lock_state", 00:04:02.103 "bdev_opal_delete", 00:04:02.103 "bdev_opal_get_info", 00:04:02.103 "bdev_opal_create", 00:04:02.103 "bdev_nvme_opal_revert", 00:04:02.103 "bdev_nvme_opal_init", 00:04:02.103 "bdev_nvme_send_cmd", 00:04:02.103 "bdev_nvme_set_keys", 00:04:02.104 "bdev_nvme_get_path_iostat", 00:04:02.104 "bdev_nvme_get_mdns_discovery_info", 00:04:02.104 "bdev_nvme_stop_mdns_discovery", 00:04:02.104 "bdev_nvme_start_mdns_discovery", 00:04:02.104 "bdev_nvme_set_multipath_policy", 00:04:02.104 "bdev_nvme_set_preferred_path", 00:04:02.104 "bdev_nvme_get_io_paths", 00:04:02.104 "bdev_nvme_remove_error_injection", 00:04:02.104 "bdev_nvme_add_error_injection", 00:04:02.104 "bdev_nvme_get_discovery_info", 00:04:02.104 "bdev_nvme_stop_discovery", 00:04:02.104 "bdev_nvme_start_discovery", 00:04:02.104 "bdev_nvme_get_controller_health_info", 00:04:02.104 "bdev_nvme_disable_controller", 00:04:02.104 "bdev_nvme_enable_controller", 00:04:02.104 "bdev_nvme_reset_controller", 00:04:02.104 "bdev_nvme_get_transport_statistics", 00:04:02.104 "bdev_nvme_apply_firmware", 00:04:02.104 "bdev_nvme_detach_controller", 00:04:02.104 "bdev_nvme_get_controllers", 00:04:02.104 "bdev_nvme_attach_controller", 00:04:02.104 "bdev_nvme_set_hotplug", 00:04:02.104 "bdev_nvme_set_options", 00:04:02.104 "bdev_passthru_delete", 00:04:02.104 "bdev_passthru_create", 00:04:02.104 "bdev_lvol_set_parent_bdev", 00:04:02.104 "bdev_lvol_set_parent", 00:04:02.104 "bdev_lvol_check_shallow_copy", 00:04:02.104 "bdev_lvol_start_shallow_copy", 00:04:02.104 "bdev_lvol_grow_lvstore", 00:04:02.104 "bdev_lvol_get_lvols", 00:04:02.104 
"bdev_lvol_get_lvstores", 00:04:02.104 "bdev_lvol_delete", 00:04:02.104 "bdev_lvol_set_read_only", 00:04:02.104 "bdev_lvol_resize", 00:04:02.104 "bdev_lvol_decouple_parent", 00:04:02.104 "bdev_lvol_inflate", 00:04:02.104 "bdev_lvol_rename", 00:04:02.104 "bdev_lvol_clone_bdev", 00:04:02.104 "bdev_lvol_clone", 00:04:02.104 "bdev_lvol_snapshot", 00:04:02.104 "bdev_lvol_create", 00:04:02.104 "bdev_lvol_delete_lvstore", 00:04:02.104 "bdev_lvol_rename_lvstore", 00:04:02.104 "bdev_lvol_create_lvstore", 00:04:02.104 "bdev_raid_set_options", 00:04:02.104 "bdev_raid_remove_base_bdev", 00:04:02.104 "bdev_raid_add_base_bdev", 00:04:02.104 "bdev_raid_delete", 00:04:02.104 "bdev_raid_create", 00:04:02.104 "bdev_raid_get_bdevs", 00:04:02.104 "bdev_error_inject_error", 00:04:02.104 "bdev_error_delete", 00:04:02.104 "bdev_error_create", 00:04:02.104 "bdev_split_delete", 00:04:02.104 "bdev_split_create", 00:04:02.104 "bdev_delay_delete", 00:04:02.104 "bdev_delay_create", 00:04:02.104 "bdev_delay_update_latency", 00:04:02.104 "bdev_zone_block_delete", 00:04:02.104 "bdev_zone_block_create", 00:04:02.104 "blobfs_create", 00:04:02.104 "blobfs_detect", 00:04:02.104 "blobfs_set_cache_size", 00:04:02.104 "bdev_aio_delete", 00:04:02.104 "bdev_aio_rescan", 00:04:02.104 "bdev_aio_create", 00:04:02.104 "bdev_ftl_set_property", 00:04:02.104 "bdev_ftl_get_properties", 00:04:02.104 "bdev_ftl_get_stats", 00:04:02.104 "bdev_ftl_unmap", 00:04:02.104 "bdev_ftl_unload", 00:04:02.104 "bdev_ftl_delete", 00:04:02.104 "bdev_ftl_load", 00:04:02.104 "bdev_ftl_create", 00:04:02.104 "bdev_virtio_attach_controller", 00:04:02.104 "bdev_virtio_scsi_get_devices", 00:04:02.104 "bdev_virtio_detach_controller", 00:04:02.104 "bdev_virtio_blk_set_hotplug", 00:04:02.104 "bdev_iscsi_delete", 00:04:02.104 "bdev_iscsi_create", 00:04:02.104 "bdev_iscsi_set_options", 00:04:02.104 "accel_error_inject_error", 00:04:02.104 "ioat_scan_accel_module", 00:04:02.104 "dsa_scan_accel_module", 00:04:02.104 "iaa_scan_accel_module", 
00:04:02.104 "vfu_virtio_create_fs_endpoint", 00:04:02.104 "vfu_virtio_create_scsi_endpoint", 00:04:02.104 "vfu_virtio_scsi_remove_target", 00:04:02.104 "vfu_virtio_scsi_add_target", 00:04:02.104 "vfu_virtio_create_blk_endpoint", 00:04:02.104 "vfu_virtio_delete_endpoint", 00:04:02.104 "keyring_file_remove_key", 00:04:02.104 "keyring_file_add_key", 00:04:02.104 "keyring_linux_set_options", 00:04:02.104 "fsdev_aio_delete", 00:04:02.104 "fsdev_aio_create", 00:04:02.104 "iscsi_get_histogram", 00:04:02.104 "iscsi_enable_histogram", 00:04:02.104 "iscsi_set_options", 00:04:02.104 "iscsi_get_auth_groups", 00:04:02.104 "iscsi_auth_group_remove_secret", 00:04:02.104 "iscsi_auth_group_add_secret", 00:04:02.104 "iscsi_delete_auth_group", 00:04:02.104 "iscsi_create_auth_group", 00:04:02.104 "iscsi_set_discovery_auth", 00:04:02.104 "iscsi_get_options", 00:04:02.104 "iscsi_target_node_request_logout", 00:04:02.104 "iscsi_target_node_set_redirect", 00:04:02.104 "iscsi_target_node_set_auth", 00:04:02.104 "iscsi_target_node_add_lun", 00:04:02.104 "iscsi_get_stats", 00:04:02.104 "iscsi_get_connections", 00:04:02.104 "iscsi_portal_group_set_auth", 00:04:02.104 "iscsi_start_portal_group", 00:04:02.104 "iscsi_delete_portal_group", 00:04:02.104 "iscsi_create_portal_group", 00:04:02.104 "iscsi_get_portal_groups", 00:04:02.104 "iscsi_delete_target_node", 00:04:02.104 "iscsi_target_node_remove_pg_ig_maps", 00:04:02.104 "iscsi_target_node_add_pg_ig_maps", 00:04:02.104 "iscsi_create_target_node", 00:04:02.104 "iscsi_get_target_nodes", 00:04:02.104 "iscsi_delete_initiator_group", 00:04:02.104 "iscsi_initiator_group_remove_initiators", 00:04:02.104 "iscsi_initiator_group_add_initiators", 00:04:02.104 "iscsi_create_initiator_group", 00:04:02.104 "iscsi_get_initiator_groups", 00:04:02.104 "nvmf_set_crdt", 00:04:02.104 "nvmf_set_config", 00:04:02.104 "nvmf_set_max_subsystems", 00:04:02.104 "nvmf_stop_mdns_prr", 00:04:02.104 "nvmf_publish_mdns_prr", 00:04:02.104 "nvmf_subsystem_get_listeners", 
00:04:02.104 "nvmf_subsystem_get_qpairs", 00:04:02.104 "nvmf_subsystem_get_controllers", 00:04:02.104 "nvmf_get_stats", 00:04:02.104 "nvmf_get_transports", 00:04:02.104 "nvmf_create_transport", 00:04:02.104 "nvmf_get_targets", 00:04:02.104 "nvmf_delete_target", 00:04:02.104 "nvmf_create_target", 00:04:02.104 "nvmf_subsystem_allow_any_host", 00:04:02.104 "nvmf_subsystem_set_keys", 00:04:02.104 "nvmf_subsystem_remove_host", 00:04:02.104 "nvmf_subsystem_add_host", 00:04:02.104 "nvmf_ns_remove_host", 00:04:02.104 "nvmf_ns_add_host", 00:04:02.104 "nvmf_subsystem_remove_ns", 00:04:02.104 "nvmf_subsystem_set_ns_ana_group", 00:04:02.104 "nvmf_subsystem_add_ns", 00:04:02.104 "nvmf_subsystem_listener_set_ana_state", 00:04:02.104 "nvmf_discovery_get_referrals", 00:04:02.104 "nvmf_discovery_remove_referral", 00:04:02.104 "nvmf_discovery_add_referral", 00:04:02.104 "nvmf_subsystem_remove_listener", 00:04:02.104 "nvmf_subsystem_add_listener", 00:04:02.104 "nvmf_delete_subsystem", 00:04:02.104 "nvmf_create_subsystem", 00:04:02.104 "nvmf_get_subsystems", 00:04:02.104 "env_dpdk_get_mem_stats", 00:04:02.104 "nbd_get_disks", 00:04:02.104 "nbd_stop_disk", 00:04:02.104 "nbd_start_disk", 00:04:02.104 "ublk_recover_disk", 00:04:02.104 "ublk_get_disks", 00:04:02.104 "ublk_stop_disk", 00:04:02.104 "ublk_start_disk", 00:04:02.104 "ublk_destroy_target", 00:04:02.104 "ublk_create_target", 00:04:02.104 "virtio_blk_create_transport", 00:04:02.104 "virtio_blk_get_transports", 00:04:02.104 "vhost_controller_set_coalescing", 00:04:02.104 "vhost_get_controllers", 00:04:02.104 "vhost_delete_controller", 00:04:02.104 "vhost_create_blk_controller", 00:04:02.104 "vhost_scsi_controller_remove_target", 00:04:02.104 "vhost_scsi_controller_add_target", 00:04:02.104 "vhost_start_scsi_controller", 00:04:02.104 "vhost_create_scsi_controller", 00:04:02.104 "thread_set_cpumask", 00:04:02.104 "scheduler_set_options", 00:04:02.104 "framework_get_governor", 00:04:02.104 "framework_get_scheduler", 00:04:02.104 
"framework_set_scheduler", 00:04:02.104 "framework_get_reactors", 00:04:02.104 "thread_get_io_channels", 00:04:02.104 "thread_get_pollers", 00:04:02.104 "thread_get_stats", 00:04:02.104 "framework_monitor_context_switch", 00:04:02.104 "spdk_kill_instance", 00:04:02.104 "log_enable_timestamps", 00:04:02.104 "log_get_flags", 00:04:02.104 "log_clear_flag", 00:04:02.104 "log_set_flag", 00:04:02.104 "log_get_level", 00:04:02.104 "log_set_level", 00:04:02.104 "log_get_print_level", 00:04:02.104 "log_set_print_level", 00:04:02.104 "framework_enable_cpumask_locks", 00:04:02.104 "framework_disable_cpumask_locks", 00:04:02.104 "framework_wait_init", 00:04:02.104 "framework_start_init", 00:04:02.104 "scsi_get_devices", 00:04:02.104 "bdev_get_histogram", 00:04:02.104 "bdev_enable_histogram", 00:04:02.104 "bdev_set_qos_limit", 00:04:02.104 "bdev_set_qd_sampling_period", 00:04:02.104 "bdev_get_bdevs", 00:04:02.104 "bdev_reset_iostat", 00:04:02.104 "bdev_get_iostat", 00:04:02.104 "bdev_examine", 00:04:02.104 "bdev_wait_for_examine", 00:04:02.104 "bdev_set_options", 00:04:02.104 "accel_get_stats", 00:04:02.104 "accel_set_options", 00:04:02.104 "accel_set_driver", 00:04:02.104 "accel_crypto_key_destroy", 00:04:02.104 "accel_crypto_keys_get", 00:04:02.104 "accel_crypto_key_create", 00:04:02.104 "accel_assign_opc", 00:04:02.104 "accel_get_module_info", 00:04:02.104 "accel_get_opc_assignments", 00:04:02.104 "vmd_rescan", 00:04:02.104 "vmd_remove_device", 00:04:02.104 "vmd_enable", 00:04:02.104 "sock_get_default_impl", 00:04:02.104 "sock_set_default_impl", 00:04:02.104 "sock_impl_set_options", 00:04:02.104 "sock_impl_get_options", 00:04:02.104 "iobuf_get_stats", 00:04:02.104 "iobuf_set_options", 00:04:02.104 "keyring_get_keys", 00:04:02.104 "vfu_tgt_set_base_path", 00:04:02.105 "framework_get_pci_devices", 00:04:02.105 "framework_get_config", 00:04:02.105 "framework_get_subsystems", 00:04:02.105 "fsdev_set_opts", 00:04:02.105 "fsdev_get_opts", 00:04:02.105 "trace_get_info", 
00:04:02.105 "trace_get_tpoint_group_mask", 00:04:02.105 "trace_disable_tpoint_group", 00:04:02.105 "trace_enable_tpoint_group", 00:04:02.105 "trace_clear_tpoint_mask", 00:04:02.105 "trace_set_tpoint_mask", 00:04:02.105 "notify_get_notifications", 00:04:02.105 "notify_get_types", 00:04:02.105 "spdk_get_version", 00:04:02.105 "rpc_get_methods" 00:04:02.105 ] 00:04:02.105 10:58:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.105 10:58:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:02.105 10:58:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3866587 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3866587 ']' 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3866587 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3866587 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3866587' 00:04:02.105 killing process with pid 3866587 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3866587 00:04:02.105 10:58:29 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3866587 00:04:02.363 00:04:02.363 real 0m1.145s 00:04:02.363 user 0m1.915s 00:04:02.363 sys 0m0.439s 00:04:02.363 10:58:29 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.363 10:58:29 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:04:02.363 ************************************ 00:04:02.363 END TEST spdkcli_tcp 00:04:02.363 ************************************ 00:04:02.363 10:58:29 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.363 10:58:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.363 10:58:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.363 10:58:29 -- common/autotest_common.sh@10 -- # set +x 00:04:02.363 ************************************ 00:04:02.363 START TEST dpdk_mem_utility 00:04:02.363 ************************************ 00:04:02.363 10:58:29 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.621 * Looking for test storage... 00:04:02.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:02.621 10:58:29 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.621 10:58:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.621 10:58:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.621 10:58:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.621 10:58:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.621 10:58:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.621 10:58:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.621 10:58:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.621 10:58:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.621 10:58:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.622 10:58:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:02.622 10:58:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.622 10:58:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.622 10:58:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.622 10:58:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:04:02.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.622 --rc genhtml_branch_coverage=1 00:04:02.622 --rc genhtml_function_coverage=1 00:04:02.622 --rc genhtml_legend=1 00:04:02.622 --rc geninfo_all_blocks=1 00:04:02.622 --rc geninfo_unexecuted_blocks=1 00:04:02.622 00:04:02.622 ' 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.622 --rc genhtml_branch_coverage=1 00:04:02.622 --rc genhtml_function_coverage=1 00:04:02.622 --rc genhtml_legend=1 00:04:02.622 --rc geninfo_all_blocks=1 00:04:02.622 --rc geninfo_unexecuted_blocks=1 00:04:02.622 00:04:02.622 ' 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.622 --rc genhtml_branch_coverage=1 00:04:02.622 --rc genhtml_function_coverage=1 00:04:02.622 --rc genhtml_legend=1 00:04:02.622 --rc geninfo_all_blocks=1 00:04:02.622 --rc geninfo_unexecuted_blocks=1 00:04:02.622 00:04:02.622 ' 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.622 --rc genhtml_branch_coverage=1 00:04:02.622 --rc genhtml_function_coverage=1 00:04:02.622 --rc genhtml_legend=1 00:04:02.622 --rc geninfo_all_blocks=1 00:04:02.622 --rc geninfo_unexecuted_blocks=1 00:04:02.622 00:04:02.622 ' 00:04:02.622 10:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:02.622 10:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3866785 00:04:02.622 10:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3866785 00:04:02.622 10:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3866785 ']' 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.622 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:02.622 [2024-11-20 10:58:30.056143] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:02.622 [2024-11-20 10:58:30.056196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866785 ] 00:04:02.881 [2024-11-20 10:58:30.132911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.881 [2024-11-20 10:58:30.174582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.141 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.141 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:03.141 10:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.141 10:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.141 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.141 
10:58:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:03.141 {
00:04:03.141 "filename": "/tmp/spdk_mem_dump.txt"
00:04:03.141 }
00:04:03.141 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:03.141 10:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:03.141 DPDK memory size 810.000000 MiB in 1 heap(s)
00:04:03.141 1 heaps totaling size 810.000000 MiB
00:04:03.141 size: 810.000000 MiB heap id: 0
00:04:03.141 end heaps----------
00:04:03.141 9 mempools totaling size 595.772034 MiB
00:04:03.141 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:03.141 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:03.141 size: 92.545471 MiB name: bdev_io_3866785
00:04:03.141 size: 50.003479 MiB name: msgpool_3866785
00:04:03.141 size: 36.509338 MiB name: fsdev_io_3866785
00:04:03.141 size: 21.763794 MiB name: PDU_Pool
00:04:03.141 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:03.141 size: 4.133484 MiB name: evtpool_3866785
00:04:03.141 size: 0.026123 MiB name: Session_Pool
00:04:03.141 end mempools-------
00:04:03.141 6 memzones totaling size 4.142822 MiB
00:04:03.141 size: 1.000366 MiB name: RG_ring_0_3866785
00:04:03.141 size: 1.000366 MiB name: RG_ring_1_3866785
00:04:03.141 size: 1.000366 MiB name: RG_ring_4_3866785
00:04:03.141 size: 1.000366 MiB name: RG_ring_5_3866785
00:04:03.141 size: 0.125366 MiB name: RG_ring_2_3866785
00:04:03.141 size: 0.015991 MiB name: RG_ring_3_3866785
00:04:03.141 end memzones-------
00:04:03.141 10:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:03.141 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15
00:04:03.141 list of free elements. size: 10.862488 MiB
00:04:03.141 element at address: 0x200018a00000 with size: 0.999878 MiB
00:04:03.141 element at address: 0x200018c00000 with size: 0.999878 MiB
00:04:03.141 element at address: 0x200000400000 with size: 0.998535 MiB
00:04:03.141 element at address: 0x200031800000 with size: 0.994446 MiB
00:04:03.141 element at address: 0x200006400000 with size: 0.959839 MiB
00:04:03.141 element at address: 0x200012c00000 with size: 0.954285 MiB
00:04:03.141 element at address: 0x200018e00000 with size: 0.936584 MiB
00:04:03.141 element at address: 0x200000200000 with size: 0.717346 MiB
00:04:03.141 element at address: 0x20001a600000 with size: 0.582886 MiB
00:04:03.141 element at address: 0x200000c00000 with size: 0.495422 MiB
00:04:03.141 element at address: 0x20000a600000 with size: 0.490723 MiB
00:04:03.141 element at address: 0x200019000000 with size: 0.485657 MiB
00:04:03.141 element at address: 0x200003e00000 with size: 0.481934 MiB
00:04:03.141 element at address: 0x200027a00000 with size: 0.410034 MiB
00:04:03.141 element at address: 0x200000800000 with size: 0.355042 MiB
00:04:03.141 list of standard malloc elements. size: 199.218628 MiB
00:04:03.141 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:04:03.141 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:04:03.141 element at address: 0x200018afff80 with size: 1.000122 MiB
00:04:03.141 element at address: 0x200018cfff80 with size: 1.000122 MiB
00:04:03.141 element at address: 0x200018efff80 with size: 1.000122 MiB
00:04:03.141 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:03.141 element at address: 0x200018eeff00 with size: 0.062622 MiB
00:04:03.141 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:03.141 element at address: 0x200018eefdc0 with size: 0.000305 MiB
00:04:03.141 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:03.141 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:03.141 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:04:03.141 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:04:03.141 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:04:03.141 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:04:03.141 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:04:03.141 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:04:03.141 element at address: 0x20000085b040 with size: 0.000183 MiB
00:04:03.141 element at address: 0x20000085f300 with size: 0.000183 MiB
00:04:03.141 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:04:03.141 element at address: 0x20000087f680 with size: 0.000183 MiB
00:04:03.141 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:04:03.141 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200000cff000 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200003efb980 with size: 0.000183 MiB
00:04:03.141 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:04:03.141 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:04:03.141 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:04:03.141 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200012cf44c0 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200018eefc40 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200018eefd00 with size: 0.000183 MiB
00:04:03.141 element at address: 0x2000190bc740 with size: 0.000183 MiB
00:04:03.141 element at address: 0x20001a695380 with size: 0.000183 MiB
00:04:03.141 element at address: 0x20001a695440 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200027a68f80 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200027a69040 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200027a6fc40 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200027a6fe40 with size: 0.000183 MiB
00:04:03.141 element at address: 0x200027a6ff00 with size: 0.000183 MiB
00:04:03.142 list of memzone associated elements. size: 599.918884 MiB
00:04:03.142 element at address: 0x20001a695500 with size: 211.416748 MiB
00:04:03.142 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:03.142 element at address: 0x200027a6ffc0 with size: 157.562561 MiB
00:04:03.142 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:03.142 element at address: 0x200012df4780 with size: 92.045044 MiB
00:04:03.142 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3866785_0
00:04:03.142 element at address: 0x200000dff380 with size: 48.003052 MiB
00:04:03.142 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3866785_0
00:04:03.142 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:04:03.142 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3866785_0
00:04:03.142 element at address: 0x2000191be940 with size: 20.255554 MiB
00:04:03.142 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:03.142 element at address: 0x2000319feb40 with size: 18.005066 MiB
00:04:03.142 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:03.142 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:04:03.142 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3866785_0
00:04:03.142 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:04:03.142 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3866785
00:04:03.142 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:03.142 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3866785
00:04:03.142 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:04:03.142 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:03.142 element at address: 0x2000190bc800 with size: 1.008118 MiB
00:04:03.142 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:03.142 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:04:03.142 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:03.142 element at address: 0x200003efba40 with size: 1.008118 MiB
00:04:03.142 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:03.142 element at address: 0x200000cff180 with size: 1.000488 MiB
00:04:03.142 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3866785
00:04:03.142 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:04:03.142 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3866785
00:04:03.142 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:04:03.142 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3866785
00:04:03.142 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:04:03.142 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3866785
00:04:03.142 element at address: 0x20000087f740 with size: 0.500488 MiB
00:04:03.142 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3866785
00:04:03.142 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:04:03.142 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3866785
00:04:03.142 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:04:03.142 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:03.142 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:04:03.142 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:03.142 element at address: 0x20001907c540 with size: 0.250488 MiB
00:04:03.142 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:03.142 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:04:03.142 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3866785
00:04:03.142 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:04:03.142 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3866785
00:04:03.142 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:04:03.142 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:03.142 element at address: 0x200027a69100 with size: 0.023743 MiB
00:04:03.142 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:03.142 element at address: 0x20000085b100 with size: 0.016113 MiB
00:04:03.142 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3866785
00:04:03.142 element at address: 0x200027a6f240 with size: 0.002441 MiB
00:04:03.142 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:03.142 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:04:03.142 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3866785
00:04:03.142 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:04:03.142 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3866785
00:04:03.142 element at address: 0x20000085af00 with size: 0.000305 MiB
00:04:03.142 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3866785
00:04:03.142 element at address: 0x200027a6fd00 with size: 0.000305 MiB
00:04:03.142 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:03.142 10:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:03.142 10:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3866785
00:04:03.142 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3866785 ']'
00:04:03.142 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3866785
00:04:03.142 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:03.142 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:03.142 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3866785
00:04:03.142 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:03.142 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:03.142 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3866785'
00:04:03.142 killing process with pid 3866785
00:04:03.142 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3866785
00:04:03.142 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3866785
00:04:03.400
00:04:03.400 real 0m1.024s
00:04:03.400 user 0m0.949s
00:04:03.400 sys 0m0.424s
00:04:03.400 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:03.400 10:58:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:03.400 ************************************
00:04:03.400 END TEST dpdk_mem_utility
00:04:03.400 ************************************
00:04:03.401 10:58:30 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:03.401 10:58:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:03.401 10:58:30 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:03.401 10:58:30 -- common/autotest_common.sh@10 -- # set +x
00:04:03.659 ************************************
00:04:03.659 START TEST event
00:04:03.659 ************************************
00:04:03.659 10:58:30 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:03.660 * Looking for test storage...
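The dpdk_mem_info.py dump above reports "9 mempools totaling size 595.772034 MiB" followed by the per-pool sizes. As a quick sanity check of that summary (a sketch; the records below are copied verbatim from this log, and the `size: … MiB name: …` pattern is only what this particular dump prints):

```python
# Sum the per-mempool sizes from the dpdk_mem_info.py output and compare
# against the reported total of 595.772034 MiB across 9 mempools.
import re

dump = """\
size: 212.674988 MiB name: PDU_immediate_data_Pool
size: 158.602051 MiB name: PDU_data_out_Pool
size: 92.545471 MiB name: bdev_io_3866785
size: 50.003479 MiB name: msgpool_3866785
size: 36.509338 MiB name: fsdev_io_3866785
size: 21.763794 MiB name: PDU_Pool
size: 19.513306 MiB name: SCSI_TASK_Pool
size: 4.133484 MiB name: evtpool_3866785
size: 0.026123 MiB name: Session_Pool
"""

# Pull out every "size: <float> MiB" record and total them.
sizes = [float(s) for s in re.findall(r"size: ([0-9.]+) MiB", dump)]
total_mib = round(sum(sizes), 6)
print(len(sizes), total_mib)  # 9 595.772034
```

The individual pool sizes do add up to the summary line, so the dump's totals are internally consistent.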
00:04:03.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:03.660 10:58:31 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.660 10:58:31 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.660 10:58:31 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.660 10:58:31 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.660 10:58:31 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.660 10:58:31 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.660 10:58:31 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.660 10:58:31 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.660 10:58:31 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.660 10:58:31 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.660 10:58:31 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.660 10:58:31 event -- scripts/common.sh@344 -- # case "$op" in 00:04:03.660 10:58:31 event -- scripts/common.sh@345 -- # : 1 00:04:03.660 10:58:31 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.660 10:58:31 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.660 10:58:31 event -- scripts/common.sh@365 -- # decimal 1 00:04:03.660 10:58:31 event -- scripts/common.sh@353 -- # local d=1 00:04:03.660 10:58:31 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.660 10:58:31 event -- scripts/common.sh@355 -- # echo 1 00:04:03.660 10:58:31 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.660 10:58:31 event -- scripts/common.sh@366 -- # decimal 2 00:04:03.660 10:58:31 event -- scripts/common.sh@353 -- # local d=2 00:04:03.660 10:58:31 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.660 10:58:31 event -- scripts/common.sh@355 -- # echo 2 00:04:03.660 10:58:31 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.660 10:58:31 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.660 10:58:31 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.660 10:58:31 event -- scripts/common.sh@368 -- # return 0 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:03.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.660 --rc genhtml_branch_coverage=1 00:04:03.660 --rc genhtml_function_coverage=1 00:04:03.660 --rc genhtml_legend=1 00:04:03.660 --rc geninfo_all_blocks=1 00:04:03.660 --rc geninfo_unexecuted_blocks=1 00:04:03.660 00:04:03.660 ' 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:03.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.660 --rc genhtml_branch_coverage=1 00:04:03.660 --rc genhtml_function_coverage=1 00:04:03.660 --rc genhtml_legend=1 00:04:03.660 --rc geninfo_all_blocks=1 00:04:03.660 --rc geninfo_unexecuted_blocks=1 00:04:03.660 00:04:03.660 ' 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:03.660 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:03.660 --rc genhtml_branch_coverage=1 00:04:03.660 --rc genhtml_function_coverage=1 00:04:03.660 --rc genhtml_legend=1 00:04:03.660 --rc geninfo_all_blocks=1 00:04:03.660 --rc geninfo_unexecuted_blocks=1 00:04:03.660 00:04:03.660 ' 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:03.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.660 --rc genhtml_branch_coverage=1 00:04:03.660 --rc genhtml_function_coverage=1 00:04:03.660 --rc genhtml_legend=1 00:04:03.660 --rc geninfo_all_blocks=1 00:04:03.660 --rc geninfo_unexecuted_blocks=1 00:04:03.660 00:04:03.660 ' 00:04:03.660 10:58:31 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:03.660 10:58:31 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:03.660 10:58:31 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:03.660 10:58:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.660 10:58:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:03.660 ************************************ 00:04:03.660 START TEST event_perf 00:04:03.660 ************************************ 00:04:03.660 10:58:31 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:03.919 Running I/O for 1 seconds...[2024-11-20 10:58:31.155352] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:04:03.919 [2024-11-20 10:58:31.155423] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3867000 ] 00:04:03.919 [2024-11-20 10:58:31.236421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:03.919 [2024-11-20 10:58:31.281324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.919 [2024-11-20 10:58:31.281360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:03.919 [2024-11-20 10:58:31.281395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.919 [2024-11-20 10:58:31.281394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:04.852 Running I/O for 1 seconds... 00:04:04.852 lcore 0: 199916 00:04:04.852 lcore 1: 199915 00:04:04.852 lcore 2: 199915 00:04:04.852 lcore 3: 199916 00:04:04.852 done. 
00:04:04.852 00:04:04.852 real 0m1.189s 00:04:04.852 user 0m4.098s 00:04:04.852 sys 0m0.087s 00:04:04.852 10:58:32 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.852 10:58:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:04.852 ************************************ 00:04:04.852 END TEST event_perf 00:04:04.852 ************************************ 00:04:05.111 10:58:32 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:05.111 10:58:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:05.111 10:58:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.111 10:58:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:05.111 ************************************ 00:04:05.111 START TEST event_reactor 00:04:05.111 ************************************ 00:04:05.111 10:58:32 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:05.111 [2024-11-20 10:58:32.415770] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:04:05.111 [2024-11-20 10:58:32.415835] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3867250 ] 00:04:05.111 [2024-11-20 10:58:32.497852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.111 [2024-11-20 10:58:32.538535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.487 test_start 00:04:06.487 oneshot 00:04:06.487 tick 100 00:04:06.487 tick 100 00:04:06.487 tick 250 00:04:06.487 tick 100 00:04:06.487 tick 100 00:04:06.487 tick 100 00:04:06.487 tick 250 00:04:06.487 tick 500 00:04:06.487 tick 100 00:04:06.487 tick 100 00:04:06.487 tick 250 00:04:06.487 tick 100 00:04:06.487 tick 100 00:04:06.487 test_end 00:04:06.487 00:04:06.487 real 0m1.182s 00:04:06.487 user 0m1.105s 00:04:06.487 sys 0m0.072s 00:04:06.487 10:58:33 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.487 10:58:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:06.488 ************************************ 00:04:06.488 END TEST event_reactor 00:04:06.488 ************************************ 00:04:06.488 10:58:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:06.488 10:58:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:06.488 10:58:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.488 10:58:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.488 ************************************ 00:04:06.488 START TEST event_reactor_perf 00:04:06.488 ************************************ 00:04:06.488 10:58:33 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:06.488 [2024-11-20 10:58:33.669872] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:06.488 [2024-11-20 10:58:33.669938] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3867501 ] 00:04:06.488 [2024-11-20 10:58:33.751113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.488 [2024-11-20 10:58:33.791745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.424 test_start 00:04:07.424 test_end 00:04:07.424 Performance: 503214 events per second 00:04:07.424 00:04:07.424 real 0m1.180s 00:04:07.424 user 0m1.096s 00:04:07.424 sys 0m0.080s 00:04:07.424 10:58:34 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.424 10:58:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:07.424 ************************************ 00:04:07.424 END TEST event_reactor_perf 00:04:07.424 ************************************ 00:04:07.424 10:58:34 event -- event/event.sh@49 -- # uname -s 00:04:07.424 10:58:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:07.424 10:58:34 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:07.424 10:58:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.424 10:58:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.424 10:58:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.424 ************************************ 00:04:07.424 START TEST event_scheduler 00:04:07.424 ************************************ 00:04:07.424 10:58:34 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:07.683 * Looking for test storage... 00:04:07.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:07.683 10:58:34 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.683 10:58:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.683 10:58:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:07.683 10:58:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.683 10:58:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:07.684 10:58:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.684 10:58:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.684 10:58:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.684 10:58:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.684 --rc genhtml_branch_coverage=1 00:04:07.684 --rc genhtml_function_coverage=1 00:04:07.684 --rc genhtml_legend=1 00:04:07.684 --rc geninfo_all_blocks=1 00:04:07.684 --rc geninfo_unexecuted_blocks=1 00:04:07.684 00:04:07.684 ' 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.684 --rc genhtml_branch_coverage=1 00:04:07.684 --rc genhtml_function_coverage=1 00:04:07.684 --rc 
genhtml_legend=1 00:04:07.684 --rc geninfo_all_blocks=1 00:04:07.684 --rc geninfo_unexecuted_blocks=1 00:04:07.684 00:04:07.684 ' 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.684 --rc genhtml_branch_coverage=1 00:04:07.684 --rc genhtml_function_coverage=1 00:04:07.684 --rc genhtml_legend=1 00:04:07.684 --rc geninfo_all_blocks=1 00:04:07.684 --rc geninfo_unexecuted_blocks=1 00:04:07.684 00:04:07.684 ' 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.684 --rc genhtml_branch_coverage=1 00:04:07.684 --rc genhtml_function_coverage=1 00:04:07.684 --rc genhtml_legend=1 00:04:07.684 --rc geninfo_all_blocks=1 00:04:07.684 --rc geninfo_unexecuted_blocks=1 00:04:07.684 00:04:07.684 ' 00:04:07.684 10:58:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:07.684 10:58:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3867787 00:04:07.684 10:58:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:07.684 10:58:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.684 10:58:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3867787 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3867787 ']' 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.684 10:58:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.684 [2024-11-20 10:58:35.126090] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:07.684 [2024-11-20 10:58:35.126139] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3867787 ] 00:04:07.942 [2024-11-20 10:58:35.200047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:07.942 [2024-11-20 10:58:35.243501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.942 [2024-11-20 10:58:35.243545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.942 [2024-11-20 10:58:35.243658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:07.942 [2024-11-20 10:58:35.243659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:07.942 10:58:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.942 [2024-11-20 10:58:35.292327] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:07.942 [2024-11-20 10:58:35.292344] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:07.942 [2024-11-20 10:58:35.292353] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:07.942 [2024-11-20 10:58:35.292359] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:07.942 [2024-11-20 10:58:35.292364] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.942 10:58:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.942 [2024-11-20 10:58:35.366667] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.942 10:58:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.942 10:58:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.942 ************************************ 00:04:07.942 START TEST scheduler_create_thread 00:04:07.942 ************************************ 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.942 2 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.942 3 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.942 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.200 4 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.200 5 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.200 10:58:35 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.200 6 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.200 7 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.200 8 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.200 10:58:35 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.200 9 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.200 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.201 10 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.201 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.767 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.767 10:58:35 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:08.767 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.767 10:58:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.142 10:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.142 10:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:10.142 10:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:10.142 10:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.142 10:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.077 10:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.077 00:04:11.077 real 0m3.103s 00:04:11.077 user 0m0.023s 00:04:11.077 sys 0m0.006s 00:04:11.077 10:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.077 10:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.077 ************************************ 00:04:11.077 END TEST scheduler_create_thread 00:04:11.077 ************************************ 00:04:11.077 10:58:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:11.077 10:58:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3867787 00:04:11.077 10:58:38 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3867787 ']' 00:04:11.077 10:58:38 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3867787 00:04:11.077 10:58:38 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:11.077 10:58:38 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.077 10:58:38 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3867787 00:04:11.366 10:58:38 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:11.366 10:58:38 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:11.366 10:58:38 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3867787' 00:04:11.366 killing process with pid 3867787 00:04:11.366 10:58:38 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3867787 00:04:11.366 10:58:38 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3867787 00:04:11.657 [2024-11-20 10:58:38.885885] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:11.657 00:04:11.657 real 0m4.167s 00:04:11.657 user 0m6.644s 00:04:11.657 sys 0m0.396s 00:04:11.657 10:58:39 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.657 10:58:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.657 ************************************ 00:04:11.657 END TEST event_scheduler 00:04:11.657 ************************************ 00:04:11.657 10:58:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:11.657 10:58:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:11.657 10:58:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.657 10:58:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.657 10:58:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.657 ************************************ 00:04:11.657 START TEST app_repeat 00:04:11.914 ************************************ 00:04:11.914 10:58:39 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:11.914 10:58:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.914 10:58:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.914 10:58:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:11.914 10:58:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.914 10:58:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:11.915 10:58:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:11.915 10:58:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:11.915 10:58:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3868536 00:04:11.915 10:58:39 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:11.915 10:58:39 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.915 10:58:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3868536' 00:04:11.915 Process app_repeat pid: 3868536 00:04:11.915 10:58:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:11.915 10:58:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:11.915 spdk_app_start Round 0 00:04:11.915 10:58:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3868536 /var/tmp/spdk-nbd.sock 00:04:11.915 10:58:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3868536 ']' 00:04:11.915 10:58:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:11.915 10:58:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.915 10:58:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:11.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:11.915 10:58:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.915 10:58:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:11.915 [2024-11-20 10:58:39.186854] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:04:11.915 [2024-11-20 10:58:39.186910] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3868536 ] 00:04:11.915 [2024-11-20 10:58:39.264826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:11.915 [2024-11-20 10:58:39.306320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.915 [2024-11-20 10:58:39.306321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.915 10:58:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.915 10:58:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:11.915 10:58:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.174 Malloc0 00:04:12.174 10:58:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.433 Malloc1 00:04:12.433 10:58:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.433 
10:58:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.433 10:58:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:12.693 /dev/nbd0 00:04:12.693 10:58:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:12.693 10:58:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:12.693 1+0 records in 00:04:12.693 1+0 records out 00:04:12.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187091 s, 21.9 MB/s 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:12.693 10:58:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:12.693 10:58:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.693 10:58:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.693 10:58:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:12.952 /dev/nbd1 00:04:12.952 10:58:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:12.952 10:58:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:12.952 10:58:40 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:12.952 1+0 records in 00:04:12.952 1+0 records out 00:04:12.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208095 s, 19.7 MB/s 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:12.952 10:58:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:12.952 10:58:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.952 10:58:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.952 10:58:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:12.952 10:58:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.952 10:58:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:13.211 { 00:04:13.211 "nbd_device": "/dev/nbd0", 00:04:13.211 "bdev_name": "Malloc0" 00:04:13.211 }, 00:04:13.211 { 00:04:13.211 "nbd_device": "/dev/nbd1", 00:04:13.211 "bdev_name": "Malloc1" 00:04:13.211 } 00:04:13.211 ]' 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:13.211 { 00:04:13.211 "nbd_device": "/dev/nbd0", 00:04:13.211 "bdev_name": "Malloc0" 00:04:13.211 
}, 00:04:13.211 { 00:04:13.211 "nbd_device": "/dev/nbd1", 00:04:13.211 "bdev_name": "Malloc1" 00:04:13.211 } 00:04:13.211 ]' 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:13.211 /dev/nbd1' 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:13.211 /dev/nbd1' 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:13.211 256+0 records in 00:04:13.211 256+0 records out 00:04:13.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103005 s, 102 MB/s 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:13.211 256+0 records in 00:04:13.211 256+0 records out 00:04:13.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014562 s, 72.0 MB/s 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:13.211 256+0 records in 00:04:13.211 256+0 records out 00:04:13.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148575 s, 70.6 MB/s 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:13.211 10:58:40 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.211 10:58:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:13.470 10:58:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:13.470 10:58:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:13.470 10:58:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:13.470 10:58:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.470 10:58:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.470 10:58:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:13.470 10:58:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.470 10:58:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.470 10:58:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.470 10:58:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:13.729 10:58:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:13.729 10:58:41 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:13.729 10:58:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:13.729 10:58:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.729 10:58:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.729 10:58:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:13.729 10:58:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.729 10:58:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.729 10:58:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.729 10:58:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.729 10:58:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.986 10:58:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:13.986 10:58:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:13.986 10:58:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.986 10:58:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:13.986 10:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.986 10:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:13.986 10:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:13.986 10:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:13.987 10:58:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:13.987 10:58:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:13.987 10:58:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:13.987 10:58:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:13.987 10:58:41 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:14.244 10:58:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:14.244 [2024-11-20 10:58:41.692243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.244 [2024-11-20 10:58:41.729736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.245 [2024-11-20 10:58:41.729736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.502 [2024-11-20 10:58:41.770976] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:14.502 [2024-11-20 10:58:41.771023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:17.786 10:58:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:17.786 10:58:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:17.786 spdk_app_start Round 1 00:04:17.786 10:58:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3868536 /var/tmp/spdk-nbd.sock 00:04:17.786 10:58:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3868536 ']' 00:04:17.786 10:58:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.786 10:58:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.786 10:58:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
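The "Waiting for process to start up and listen on UNIX domain socket ..." message above comes from the `waitforlisten` helper. A hypothetical, simplified sketch of that wait-for-socket idiom (the real helper in `autotest_common.sh` also tracks the target PID and probes it over RPC, which is omitted here):

```shell
# Simplified sketch, NOT the actual autotest_common.sh implementation:
# poll until the UNIX-domain socket node exists, with a bounded retry count.
wait_for_socket() {
    local sock=$1 retries=${2:-100} i
    for ((i = 1; i <= retries; i++)); do
        [ -S "$sock" ] && return 0   # socket exists: the app is listening
        sleep 0.1
    done
    return 1                         # timed out waiting for the listener
}
```

Only once the socket is present does the test start driving `rpc.py -s /var/tmp/spdk-nbd.sock` commands against it.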
00:04:17.786 10:58:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.786 10:58:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:17.786 10:58:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.786 10:58:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:17.786 10:58:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:17.786 Malloc0 00:04:17.786 10:58:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:17.786 Malloc1 00:04:17.786 10:58:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.786 10:58:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:18.044 /dev/nbd0 00:04:18.044 10:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:18.044 10:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.045 1+0 records in 00:04:18.045 1+0 records out 00:04:18.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463621 s, 8.8 MB/s 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:18.045 10:58:45 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:18.045 10:58:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:18.045 10:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.045 10:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.045 10:58:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:18.303 /dev/nbd1 00:04:18.303 10:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:18.303 10:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.303 1+0 records in 00:04:18.303 1+0 records out 00:04:18.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251692 s, 16.3 MB/s 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:18.303 10:58:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:18.303 10:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.303 10:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.303 10:58:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.303 10:58:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.303 10:58:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:18.562 { 00:04:18.562 "nbd_device": "/dev/nbd0", 00:04:18.562 "bdev_name": "Malloc0" 00:04:18.562 }, 00:04:18.562 { 00:04:18.562 "nbd_device": "/dev/nbd1", 00:04:18.562 "bdev_name": "Malloc1" 00:04:18.562 } 00:04:18.562 ]' 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:18.562 { 00:04:18.562 "nbd_device": "/dev/nbd0", 00:04:18.562 "bdev_name": "Malloc0" 00:04:18.562 }, 00:04:18.562 { 00:04:18.562 "nbd_device": "/dev/nbd1", 00:04:18.562 "bdev_name": "Malloc1" 00:04:18.562 } 00:04:18.562 ]' 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:18.562 /dev/nbd1' 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:18.562 /dev/nbd1' 00:04:18.562 
10:58:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:18.562 256+0 records in 00:04:18.562 256+0 records out 00:04:18.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010279 s, 102 MB/s 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:18.562 256+0 records in 00:04:18.562 256+0 records out 00:04:18.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139402 s, 75.2 MB/s 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:18.562 256+0 records in 00:04:18.562 256+0 records out 00:04:18.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148756 s, 70.5 MB/s 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:18.562 10:58:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.562 10:58:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:18.562 10:58:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.562 10:58:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:18.562 10:58:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:18.562 10:58:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:18.562 10:58:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.562 10:58:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:18.821 10:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:18.821 10:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:18.821 10:58:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:18.821 10:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.821 10:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.821 10:58:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:18.821 10:58:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:18.821 10:58:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.821 10:58:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.821 10:58:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:19.079 10:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:19.079 10:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:19.079 10:58:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:19.079 10:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.079 10:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.079 10:58:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:19.079 10:58:46 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:19.079 10:58:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.079 10:58:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.079 10:58:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.079 10:58:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:19.337 10:58:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:19.337 10:58:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:19.596 10:58:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:19.596 [2024-11-20 10:58:47.040200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:19.596 [2024-11-20 10:58:47.077296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.596 [2024-11-20 10:58:47.077296] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.855 [2024-11-20 10:58:47.118773] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:19.855 [2024-11-20 10:58:47.118815] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:23.146 10:58:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:23.146 10:58:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:23.146 spdk_app_start Round 2 00:04:23.146 10:58:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3868536 /var/tmp/spdk-nbd.sock 00:04:23.146 10:58:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3868536 ']' 00:04:23.146 10:58:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.147 10:58:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.147 10:58:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:23.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
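The repeated `(( i <= 20 ))` / `grep -q -w nbdX /proc/partitions` / `break` sequence in the trace is the `waitfornbd` polling loop. A sketch of that pattern, with the partition-table path made a parameter purely so the sketch is testable without real NBD devices (the real helper always reads `/proc/partitions`):

```shell
# Sketch of the waitfornbd idiom: poll the partition table until the named
# device appears, up to 20 attempts, as seen in the xtrace above.
waitfornbd() {
    local nbd_name=$1 table=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" "$table"; then
            # The real helper then does one O_DIRECT read to confirm the
            # device actually completes I/O, roughly:
            #   dd if=/dev/$nbd_name of=... bs=4096 count=1 iflag=direct
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

The `-w` flag matters: it word-matches `nbd1` without also matching `nbd10` or `nbd11`.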
00:04:23.147 10:58:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.147 10:58:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:23.147 10:58:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.147 10:58:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:23.147 10:58:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.147 Malloc0 00:04:23.147 10:58:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.147 Malloc1 00:04:23.147 10:58:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.147 10:58:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.405 /dev/nbd0 00:04:23.405 10:58:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.405 10:58:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.405 1+0 records in 00:04:23.405 1+0 records out 00:04:23.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199 s, 20.6 MB/s 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.405 10:58:50 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.405 10:58:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.405 10:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.405 10:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.405 10:58:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.663 /dev/nbd1 00:04:23.663 10:58:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.663 10:58:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.663 1+0 records in 00:04:23.663 1+0 records out 00:04:23.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243526 s, 16.8 MB/s 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.663 10:58:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.663 10:58:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.663 10:58:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.663 10:58:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.663 10:58:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.663 10:58:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:23.921 { 00:04:23.921 "nbd_device": "/dev/nbd0", 00:04:23.921 "bdev_name": "Malloc0" 00:04:23.921 }, 00:04:23.921 { 00:04:23.921 "nbd_device": "/dev/nbd1", 00:04:23.921 "bdev_name": "Malloc1" 00:04:23.921 } 00:04:23.921 ]' 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:23.921 { 00:04:23.921 "nbd_device": "/dev/nbd0", 00:04:23.921 "bdev_name": "Malloc0" 00:04:23.921 }, 00:04:23.921 { 00:04:23.921 "nbd_device": "/dev/nbd1", 00:04:23.921 "bdev_name": "Malloc1" 00:04:23.921 } 00:04:23.921 ]' 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:23.921 /dev/nbd1' 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:23.921 /dev/nbd1' 00:04:23.921 
10:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:23.921 256+0 records in 00:04:23.921 256+0 records out 00:04:23.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010687 s, 98.1 MB/s 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:23.921 256+0 records in 00:04:23.921 256+0 records out 00:04:23.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149668 s, 70.1 MB/s 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:23.921 256+0 records in 00:04:23.921 256+0 records out 00:04:23.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149932 s, 69.9 MB/s 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:23.921 10:58:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:23.922 10:58:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.922 10:58:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.180 10:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.180 10:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.180 10:58:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.180 10:58:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.180 10:58:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.180 10:58:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.180 10:58:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.180 10:58:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.180 10:58:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.180 10:58:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.438 10:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.438 10:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.438 10:58:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.438 10:58:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.438 10:58:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.438 10:58:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.438 10:58:51 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:24.438 10:58:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.438 10:58:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.438 10:58:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.438 10:58:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.697 10:58:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.697 10:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.697 10:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.697 10:58:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.697 10:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.697 10:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.697 10:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:24.697 10:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.697 10:58:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.697 10:58:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.697 10:58:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.697 10:58:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.697 10:58:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.955 10:58:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:24.955 [2024-11-20 10:58:52.387822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.955 [2024-11-20 10:58:52.425487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.955 [2024-11-20 10:58:52.425488] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.213 [2024-11-20 10:58:52.466926] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:25.213 [2024-11-20 10:58:52.466968] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.492 10:58:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3868536 /var/tmp/spdk-nbd.sock 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3868536 ']' 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
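The nbd_dd_data_verify trace above writes 1 MiB of random data to each NBD device with dd, then byte-compares it back with `cmp -b -n 1M`. A standalone sketch of that write-then-verify pattern follows; temp files stand in for `/dev/nbd0` and `/dev/nbd1` so it runs without real NBD devices (an assumption — the original test drives SPDK-backed block devices, and `oflag=direct` is dropped because regular files need no O_DIRECT alignment):

```shell
#!/usr/bin/env bash
# Sketch of bdev/nbd_common.sh's write/verify helper: dd a random data
# file onto each "device", then cmp it back. Paths here are temp files,
# not the real nbdrandtest file or /dev/nbdX nodes.
set -euo pipefail

tmp_file=$(mktemp)                    # plays the role of test/event/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none

nbd_list=("$(mktemp)" "$(mktemp)")    # stand-ins for /dev/nbd0 and /dev/nbd1
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

verified=yes
for dev in "${nbd_list[@]}"; do
    # -b prints differing bytes; -n 1M limits the compare to what was written
    cmp -b -n 1M "$tmp_file" "$dev" || verified=no
done
echo "verify: $verified"

rm -f "$tmp_file" "${nbd_list[@]}"
```

The trace then tears the devices down with `nbd_stop_disk` per entry of the same `nbd_list` array, which is why both the write and verify loops iterate the list in the same order.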
00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:28.492 10:58:55 event.app_repeat -- event/event.sh@39 -- # killprocess 3868536 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3868536 ']' 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3868536 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3868536 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3868536' 00:04:28.492 killing process with pid 3868536 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3868536 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3868536 00:04:28.492 spdk_app_start is called in Round 0. 00:04:28.492 Shutdown signal received, stop current app iteration 00:04:28.492 Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 reinitialization... 00:04:28.492 spdk_app_start is called in Round 1. 00:04:28.492 Shutdown signal received, stop current app iteration 00:04:28.492 Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 reinitialization... 00:04:28.492 spdk_app_start is called in Round 2. 
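The killprocess calls in this section follow a consistent safety pattern: resolve the pid's command name with `ps --no-headers -o comm=`, refuse to signal anything running as `sudo`, then kill and reap. A minimal sketch of that shape, with a `sleep` process standing in for the SPDK target (the stand-in and variable names are illustrative, not autotest_common.sh's exact code):

```shell
# Sketch of the killprocess helper pattern seen in the trace above:
# check the pid is alive, look up its command name, guard against
# killing sudo, then kill and reap it.
set -euo pipefail

sleep 60 &                         # stand-in long-running process
pid=$!

kill -0 "$pid"                     # fails early if the pid is already gone
process_name=$(ps --no-headers -o comm= "$pid")
if [ "$process_name" != sudo ]; then
    echo "killing process with pid $pid"
    kill "$pid"
fi
wait "$pid" 2>/dev/null || true    # reap; wait returns non-zero after a kill
```

The `wait` at the end mirrors the trace's `-- # wait 3868536` step: it both reaps the child and synchronizes shutdown before the next test round starts.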
00:04:28.492 Shutdown signal received, stop current app iteration 00:04:28.492 Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 reinitialization... 00:04:28.492 spdk_app_start is called in Round 3. 00:04:28.492 Shutdown signal received, stop current app iteration 00:04:28.492 10:58:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:28.492 10:58:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:28.492 00:04:28.492 real 0m16.486s 00:04:28.492 user 0m36.270s 00:04:28.492 sys 0m2.557s 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.492 10:58:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.492 ************************************ 00:04:28.492 END TEST app_repeat 00:04:28.492 ************************************ 00:04:28.492 10:58:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:28.492 10:58:55 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.492 10:58:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.492 10:58:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.492 10:58:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.492 ************************************ 00:04:28.492 START TEST cpu_locks 00:04:28.492 ************************************ 00:04:28.492 10:58:55 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.492 * Looking for test storage... 
00:04:28.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:28.492 10:58:55 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.492 10:58:55 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.492 10:58:55 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.492 10:58:55 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.492 10:58:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.493 10:58:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:28.493 10:58:55 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.493 10:58:55 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.493 --rc genhtml_branch_coverage=1 00:04:28.493 --rc genhtml_function_coverage=1 00:04:28.493 --rc genhtml_legend=1 00:04:28.493 --rc geninfo_all_blocks=1 00:04:28.493 --rc geninfo_unexecuted_blocks=1 00:04:28.493 00:04:28.493 ' 00:04:28.493 10:58:55 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.493 --rc genhtml_branch_coverage=1 00:04:28.493 --rc genhtml_function_coverage=1 00:04:28.493 --rc genhtml_legend=1 00:04:28.493 --rc geninfo_all_blocks=1 00:04:28.493 --rc geninfo_unexecuted_blocks=1 
00:04:28.493 00:04:28.493 ' 00:04:28.493 10:58:55 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.493 --rc genhtml_branch_coverage=1 00:04:28.493 --rc genhtml_function_coverage=1 00:04:28.493 --rc genhtml_legend=1 00:04:28.493 --rc geninfo_all_blocks=1 00:04:28.493 --rc geninfo_unexecuted_blocks=1 00:04:28.493 00:04:28.493 ' 00:04:28.493 10:58:55 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.493 --rc genhtml_branch_coverage=1 00:04:28.493 --rc genhtml_function_coverage=1 00:04:28.493 --rc genhtml_legend=1 00:04:28.493 --rc geninfo_all_blocks=1 00:04:28.493 --rc geninfo_unexecuted_blocks=1 00:04:28.493 00:04:28.493 ' 00:04:28.493 10:58:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:28.493 10:58:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:28.493 10:58:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:28.493 10:58:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:28.493 10:58:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.493 10:58:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.493 10:58:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.493 ************************************ 00:04:28.493 START TEST default_locks 00:04:28.493 ************************************ 00:04:28.493 10:58:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:28.493 10:58:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.493 10:58:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # 
spdk_tgt_pid=3871530 00:04:28.493 10:58:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3871530 00:04:28.493 10:58:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3871530 ']' 00:04:28.493 10:58:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.493 10:58:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.493 10:58:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.493 10:58:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.493 10:58:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.493 [2024-11-20 10:58:55.962329] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
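The `lt 1.15 2` trace earlier in this section walks through scripts/common.sh's cmp_versions: both version strings are split on `.`, `-`, and `:` into arrays and compared component by component. A self-contained sketch of that comparison (function name and padding-with-zero behavior are illustrative simplifications, not the script's exact code):

```shell
# Sketch of the dotted-version comparison from the cmp_versions trace:
# split on IFS=.-: into arrays, compare numerically per component,
# treating missing components as 0.
set -euo pipefail

version_lt() {                     # returns 0 (true) if $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<<"$1"
    IFS=.-: read -ra ver2 <<<"$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                       # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This is why the trace reads `ver1_l=2` and `ver2_l=1` for `1.15` versus `2`: the loop runs over the longer of the two component counts.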
00:04:28.493 [2024-11-20 10:58:55.962372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3871530 ] 00:04:28.752 [2024-11-20 10:58:56.038959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.752 [2024-11-20 10:58:56.079717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.011 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.011 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:29.011 10:58:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3871530 00:04:29.011 10:58:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3871530 00:04:29.011 10:58:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.271 lslocks: write error 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3871530 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3871530 ']' 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3871530 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3871530 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3871530' 00:04:29.271 killing process with pid 3871530 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3871530 00:04:29.271 10:58:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3871530 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3871530 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3871530 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3871530 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3871530 ']' 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
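The locks_exist check in default_locks above uses `lslocks -p PID | grep -q spdk_cpu_lock` to confirm the target holds its per-core lock file. The underlying mechanism is an exclusive `flock(2)` on a lock file; a minimal flock(1) sketch of that kind of lock (temp path — SPDK's real lock file name and location are not reproduced here):

```shell
# Sketch of an exclusive per-core file lock like the spdk_cpu_lock
# files that lslocks reports: a holder takes flock -x on fd 9, and a
# second non-blocking attempt fails while the holder lives.
set -euo pipefail

lockfile=$(mktemp)

(
    flock -x 9                     # hold the lock inside this subshell
    sleep 2
) 9>"$lockfile" &
holder=$!
sleep 0.5                          # give the holder time to acquire

if flock --nonblock --exclusive "$lockfile" true; then
    status=unlocked
else
    status=locked
fi
echo "$status"

wait "$holder"
rm -f "$lockfile"
```

The `--disable-cpumask-locks` flag seen later in the trace exists precisely to skip taking these locks, which is what lets a second target share core 0 in the non_locking tests.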
00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3871530) - No such process 00:04:29.840 ERROR: process (pid: 3871530) is no longer running 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:29.840 00:04:29.840 real 0m1.151s 00:04:29.840 user 0m1.117s 00:04:29.840 sys 0m0.538s 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.840 10:58:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.840 ************************************ 00:04:29.840 END TEST default_locks 00:04:29.840 ************************************ 00:04:29.840 10:58:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:29.840 10:58:57 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.840 10:58:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.840 10:58:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.840 ************************************ 00:04:29.840 START TEST default_locks_via_rpc 00:04:29.840 ************************************ 00:04:29.840 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:29.840 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3871785 00:04:29.840 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3871785 00:04:29.840 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.840 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3871785 ']' 00:04:29.840 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.840 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.840 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.840 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.840 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.840 [2024-11-20 10:58:57.185564] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:04:29.840 [2024-11-20 10:58:57.185605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3871785 ] 00:04:29.840 [2024-11-20 10:58:57.260523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.840 [2024-11-20 10:58:57.299311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.098 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.098 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.099 10:58:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3871785 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3871785 00:04:30.099 10:58:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3871785 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3871785 ']' 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3871785 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3871785 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3871785' 00:04:30.665 killing process with pid 3871785 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3871785 00:04:30.665 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3871785 00:04:30.924 00:04:30.924 real 0m1.230s 00:04:30.924 user 0m1.192s 00:04:30.924 sys 0m0.557s 00:04:30.924 10:58:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.924 10:58:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.924 ************************************ 00:04:30.924 END TEST default_locks_via_rpc 00:04:30.924 ************************************ 00:04:30.924 10:58:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:30.924 10:58:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.924 10:58:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.924 10:58:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.183 ************************************ 00:04:31.183 START TEST non_locking_app_on_locked_coremask 00:04:31.183 ************************************ 00:04:31.183 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:31.183 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3872041 00:04:31.183 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3872041 /var/tmp/spdk.sock 00:04:31.183 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.183 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3872041 ']' 00:04:31.183 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.183 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.183 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:31.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.183 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.183 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.183 [2024-11-20 10:58:58.487000] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:31.183 [2024-11-20 10:58:58.487043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872041 ] 00:04:31.183 [2024-11-20 10:58:58.563523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.183 [2024-11-20 10:58:58.605643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.441 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.441 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:31.442 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3872134 00:04:31.442 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3872134 /var/tmp/spdk2.sock 00:04:31.442 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:31.442 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3872134 ']' 00:04:31.442 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:31.442 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.442 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:31.442 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.442 10:58:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.442 [2024-11-20 10:58:58.870897] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:31.442 [2024-11-20 10:58:58.870953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872134 ] 00:04:31.700 [2024-11-20 10:58:58.964208] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
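Several helpers in this trace (waitfornbd_exit with its `grep -q -w nbd0 /proc/partitions` check, waitforlisten with `max_retries=100`) share the same bounded polling shape: retry a condition up to a fixed count, break on success. A sketch of that loop, with a temp file standing in for `/proc/partitions` (an assumption, so the sketch runs without NBD devices):

```shell
# Sketch of the bounded polling loop from waitfornbd_exit: check up to
# 20 times whether a name has disappeared from a status file, breaking
# as soon as it is gone.
set -euo pipefail

parts=$(mktemp)
printf 'nbd0\n' >"$parts"
( sleep 0.3; : >"$parts" ) &       # simulate the kernel removing nbd0

for (( i = 1; i <= 20; i++ )); do
    if ! grep -q -w nbd0 "$parts"; then
        break                      # device is gone; stop waiting
    fi
    sleep 0.1
done
wait
rm -f "$parts"
echo "gone after $i checks"
```

The `-w` flag matters here: it matches `nbd0` only as a whole word, so a lingering `nbd10` entry would not keep the loop spinning.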
00:04:31.700 [2024-11-20 10:58:58.964234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.700 [2024-11-20 10:58:59.052862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.267 10:58:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.267 10:58:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:32.267 10:58:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3872041 00:04:32.267 10:58:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3872041 00:04:32.267 10:58:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:32.835 lslocks: write error 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3872041 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3872041 ']' 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3872041 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3872041 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3872041' 00:04:32.835 killing process with pid 3872041 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3872041 00:04:32.835 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3872041 00:04:33.403 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3872134 00:04:33.403 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3872134 ']' 00:04:33.403 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3872134 00:04:33.403 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:33.403 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.403 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3872134 00:04:33.662 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.662 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.662 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3872134' 00:04:33.662 killing process with pid 3872134 00:04:33.662 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3872134 00:04:33.662 10:59:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3872134 00:04:33.921 00:04:33.921 real 0m2.792s 00:04:33.921 user 0m2.956s 00:04:33.921 sys 0m0.917s 00:04:33.921 10:59:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.921 10:59:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.921 ************************************ 00:04:33.921 END TEST non_locking_app_on_locked_coremask 00:04:33.921 ************************************ 00:04:33.921 10:59:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:33.921 10:59:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.921 10:59:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.921 10:59:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.921 ************************************ 00:04:33.921 START TEST locking_app_on_unlocked_coremask 00:04:33.921 ************************************ 00:04:33.921 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:33.921 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3872544 00:04:33.921 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3872544 /var/tmp/spdk.sock 00:04:33.921 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:33.921 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3872544 ']' 00:04:33.921 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.921 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.921 10:59:01 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.921 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.921 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.921 [2024-11-20 10:59:01.342717] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:33.921 [2024-11-20 10:59:01.342759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872544 ] 00:04:34.181 [2024-11-20 10:59:01.416188] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:34.181 [2024-11-20 10:59:01.416221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.181 [2024-11-20 10:59:01.454243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3872704 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3872704 /var/tmp/spdk2.sock 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3872704 ']' 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:34.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.441 10:59:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.441 [2024-11-20 10:59:01.735802] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:04:34.441 [2024-11-20 10:59:01.735852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872704 ] 00:04:34.441 [2024-11-20 10:59:01.828676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.441 [2024-11-20 10:59:01.909121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.378 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.378 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:35.378 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3872704 00:04:35.378 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3872704 00:04:35.378 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:35.637 lslocks: write error 00:04:35.637 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3872544 00:04:35.637 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3872544 ']' 00:04:35.637 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3872544 00:04:35.637 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:35.637 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.637 10:59:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3872544 00:04:35.637 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.637 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.637 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3872544' 00:04:35.637 killing process with pid 3872544 00:04:35.637 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3872544 00:04:35.637 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3872544 00:04:36.206 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3872704 00:04:36.206 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3872704 ']' 00:04:36.206 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3872704 00:04:36.206 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:36.206 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.206 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3872704 00:04:36.206 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.206 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.206 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3872704' 00:04:36.206 killing process with pid 3872704 00:04:36.206 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3872704 00:04:36.206 10:59:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3872704 00:04:36.775 00:04:36.775 real 0m2.680s 00:04:36.775 user 0m2.832s 00:04:36.775 sys 0m0.894s 00:04:36.775 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.775 10:59:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.775 ************************************ 00:04:36.775 END TEST locking_app_on_unlocked_coremask 00:04:36.775 ************************************ 00:04:36.775 10:59:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:36.775 10:59:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.775 10:59:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.775 10:59:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.775 ************************************ 00:04:36.775 START TEST locking_app_on_locked_coremask 00:04:36.775 ************************************ 00:04:36.775 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:36.775 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3873044 00:04:36.775 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3873044 /var/tmp/spdk.sock 00:04:36.775 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.775 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3873044 ']' 00:04:36.775 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:36.775 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.775 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.775 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.775 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.775 [2024-11-20 10:59:04.090658] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:36.775 [2024-11-20 10:59:04.090700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873044 ] 00:04:36.775 [2024-11-20 10:59:04.166274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.775 [2024-11-20 10:59:04.204618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3873206 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3873206 /var/tmp/spdk2.sock 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3873206 /var/tmp/spdk2.sock 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3873206 /var/tmp/spdk2.sock 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3873206 ']' 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:37.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.035 10:59:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.035 [2024-11-20 10:59:04.485915] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:37.035 [2024-11-20 10:59:04.485972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873206 ] 00:04:37.293 [2024-11-20 10:59:04.579273] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3873044 has claimed it. 00:04:37.293 [2024-11-20 10:59:04.579313] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:37.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3873206) - No such process 00:04:37.860 ERROR: process (pid: 3873206) is no longer running 00:04:37.860 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.860 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:37.861 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:37.861 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.861 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:37.861 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.861 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3873044 00:04:37.861 10:59:05 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3873044 00:04:37.861 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.120 lslocks: write error 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3873044 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3873044 ']' 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3873044 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3873044 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3873044' 00:04:38.120 killing process with pid 3873044 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3873044 00:04:38.120 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3873044 00:04:38.687 00:04:38.687 real 0m1.879s 00:04:38.687 user 0m2.014s 00:04:38.687 sys 0m0.653s 00:04:38.687 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.687 10:59:05 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:38.687 ************************************ 00:04:38.687 END TEST locking_app_on_locked_coremask 00:04:38.687 ************************************ 00:04:38.688 10:59:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:38.688 10:59:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.688 10:59:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.688 10:59:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.688 ************************************ 00:04:38.688 START TEST locking_overlapped_coremask 00:04:38.688 ************************************ 00:04:38.688 10:59:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:38.688 10:59:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3873529 00:04:38.688 10:59:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3873529 /var/tmp/spdk.sock 00:04:38.688 10:59:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:38.688 10:59:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3873529 ']' 00:04:38.688 10:59:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.688 10:59:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.688 10:59:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:38.688 10:59:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.688 10:59:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.688 [2024-11-20 10:59:06.041774] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:38.688 [2024-11-20 10:59:06.041817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873529 ] 00:04:38.688 [2024-11-20 10:59:06.116763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:38.688 [2024-11-20 10:59:06.161942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.688 [2024-11-20 10:59:06.162054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.688 [2024-11-20 10:59:06.162054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3873535 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3873535 /var/tmp/spdk2.sock 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 3873535 /var/tmp/spdk2.sock 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3873535 /var/tmp/spdk2.sock 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3873535 ']' 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.947 10:59:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.947 [2024-11-20 10:59:06.420399] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:04:38.947 [2024-11-20 10:59:06.420442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873535 ] 00:04:39.205 [2024-11-20 10:59:06.513347] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3873529 has claimed it. 00:04:39.205 [2024-11-20 10:59:06.513381] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:39.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3873535) - No such process 00:04:39.773 ERROR: process (pid: 3873535) is no longer running 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3873529 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3873529 ']' 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3873529 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3873529 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3873529' 00:04:39.773 killing process with pid 3873529 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3873529 00:04:39.773 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3873529 00:04:40.033 00:04:40.033 real 0m1.426s 00:04:40.033 user 0m3.892s 00:04:40.033 sys 0m0.426s 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.033 
************************************ 00:04:40.033 END TEST locking_overlapped_coremask 00:04:40.033 ************************************ 00:04:40.033 10:59:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:40.033 10:59:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.033 10:59:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.033 10:59:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.033 ************************************ 00:04:40.033 START TEST locking_overlapped_coremask_via_rpc 00:04:40.033 ************************************ 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3873791 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3873791 /var/tmp/spdk.sock 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3873791 ']' 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:40.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.033 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.293 [2024-11-20 10:59:07.534462] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:40.293 [2024-11-20 10:59:07.534503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873791 ] 00:04:40.293 [2024-11-20 10:59:07.605866] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:40.293 [2024-11-20 10:59:07.605894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:40.293 [2024-11-20 10:59:07.645817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.293 [2024-11-20 10:59:07.645926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.293 [2024-11-20 10:59:07.645926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3873802 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3873802 /var/tmp/spdk2.sock 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3873802 ']' 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.552 10:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.552 [2024-11-20 10:59:07.917476] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:40.552 [2024-11-20 10:59:07.917523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873802 ] 00:04:40.552 [2024-11-20 10:59:08.010587] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
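[Editor's note] The two targets traced above run with coremasks 0x7 (cores 0-2) and 0x1c (cores 2-4); with --disable-cpumask-locks both start cleanly, and the overlap on core 2 only surfaces later when locks are re-enabled via RPC. The overlap itself is just a bitwise AND of the masks, which can be sketched as:

```shell
#!/usr/bin/env bash
# Coremasks taken from the log: 0x7 for the first spdk_tgt, 0x1c for the second.
mask1=0x7
mask2=0x1c

# The contested cores are the set bits shared by both masks.
overlap=$(( mask1 & mask2 ))
printf 'overlap mask: 0x%x\n' "$overlap"   # prints: overlap mask: 0x4

# List the overlapping core indices (core 2 for these masks).
for (( core = 0; core < 8; core++ )); do
    if (( overlap & (1 << core) )); then
        echo "core $core overlaps"
    fi
done
```

This is why the later framework_enable_cpumask_locks call on the second target fails with "Failed to claim CPU core: 2": core 2 is the single bit both masks share.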
00:04:40.552 [2024-11-20 10:59:08.010618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:40.810 [2024-11-20 10:59:08.098379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.810 [2024-11-20 10:59:08.098496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.810 [2024-11-20 10:59:08.098497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.377 10:59:08 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.377 [2024-11-20 10:59:08.771016] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3873791 has claimed it. 00:04:41.377 request: 00:04:41.377 { 00:04:41.377 "method": "framework_enable_cpumask_locks", 00:04:41.377 "req_id": 1 00:04:41.377 } 00:04:41.377 Got JSON-RPC error response 00:04:41.377 response: 00:04:41.377 { 00:04:41.377 "code": -32603, 00:04:41.377 "message": "Failed to claim CPU core: 2" 00:04:41.377 } 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3873791 /var/tmp/spdk.sock 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3873791 ']' 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.377 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.636 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.636 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.636 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3873802 /var/tmp/spdk2.sock 00:04:41.636 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3873802 ']' 00:04:41.636 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.636 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.636 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
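[Editor's note] The "NOT rpc_cmd ... framework_enable_cpumask_locks" step above is an expected-failure check: the RPC must fail with -32603 because core 2 is already claimed. The NOT wrapper in autotest_common.sh is more elaborate (it validates the argument type and tracks an error status, es), but its core behavior is inverting the wrapped command's exit status, roughly:

```shell
#!/usr/bin/env bash
# Minimal sketch of an expect-failure wrapper in the spirit of the NOT helper
# from autotest_common.sh (simplified; the real helper also checks that the
# argument is executable and propagates a distinct error status).
NOT() {
    # Succeed only when the wrapped command fails.
    if "$@"; then
        return 1
    fi
    return 0
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success rejected"
```

Both lines print here: the first because `false` fails (as expected), the second because `true` succeeding makes the wrapper itself fail.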
00:04:41.636 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.636 10:59:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.896 10:59:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.896 10:59:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.896 10:59:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:41.896 10:59:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:41.896 10:59:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:41.896 10:59:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:41.896 00:04:41.896 real 0m1.706s 00:04:41.896 user 0m0.815s 00:04:41.896 sys 0m0.144s 00:04:41.896 10:59:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.896 10:59:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.896 ************************************ 00:04:41.896 END TEST locking_overlapped_coremask_via_rpc 00:04:41.896 ************************************ 00:04:41.896 10:59:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:41.896 10:59:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3873791 ]] 00:04:41.896 10:59:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3873791 00:04:41.896 10:59:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3873791 ']' 00:04:41.896 10:59:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3873791 00:04:41.896 10:59:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:41.896 10:59:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.896 10:59:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3873791 00:04:41.896 10:59:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.896 10:59:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.896 10:59:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3873791' 00:04:41.896 killing process with pid 3873791 00:04:41.896 10:59:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3873791 00:04:41.896 10:59:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3873791 00:04:42.155 10:59:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3873802 ]] 00:04:42.155 10:59:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3873802 00:04:42.155 10:59:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3873802 ']' 00:04:42.155 10:59:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3873802 00:04:42.155 10:59:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:42.155 10:59:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.155 10:59:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3873802 00:04:42.155 10:59:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:42.155 10:59:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:42.155 10:59:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3873802' 00:04:42.155 killing process with pid 3873802 00:04:42.155 10:59:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3873802 00:04:42.155 10:59:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3873802 00:04:42.722 10:59:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:42.722 10:59:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:42.722 10:59:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3873791 ]] 00:04:42.722 10:59:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3873791 00:04:42.722 10:59:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3873791 ']' 00:04:42.722 10:59:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3873791 00:04:42.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3873791) - No such process 00:04:42.722 10:59:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3873791 is not found' 00:04:42.722 Process with pid 3873791 is not found 00:04:42.722 10:59:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3873802 ]] 00:04:42.722 10:59:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3873802 00:04:42.722 10:59:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3873802 ']' 00:04:42.722 10:59:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3873802 00:04:42.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3873802) - No such process 00:04:42.722 10:59:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3873802 is not found' 00:04:42.722 Process with pid 3873802 is not found 00:04:42.722 10:59:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:42.722 00:04:42.722 real 0m14.245s 00:04:42.722 user 0m24.571s 00:04:42.722 sys 0m5.083s 00:04:42.722 10:59:09 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.722 
10:59:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.722 ************************************ 00:04:42.722 END TEST cpu_locks 00:04:42.722 ************************************ 00:04:42.722 00:04:42.722 real 0m39.068s 00:04:42.722 user 1m14.052s 00:04:42.722 sys 0m8.664s 00:04:42.722 10:59:09 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.722 10:59:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.722 ************************************ 00:04:42.722 END TEST event 00:04:42.722 ************************************ 00:04:42.722 10:59:10 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:42.722 10:59:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.722 10:59:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.722 10:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:42.722 ************************************ 00:04:42.722 START TEST thread 00:04:42.722 ************************************ 00:04:42.722 10:59:10 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:42.722 * Looking for test storage... 
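[Editor's note] The killprocess/cleanup flow traced in the cpu_locks teardown above probes a PID, kills it, and treats "No such process" as an already-finished target rather than an error. A hedged, self-contained sketch of that pattern (simplified; the real helper also inspects the process name via `ps -o comm=` and special-cases sudo):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern seen in autotest_common.sh: check liveness
# with `kill -0`, terminate, then wait. A missing process is reported, not fatal.
killprocess_sketch() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

sleep 60 &
bgpid=$!
killprocess_sketch "$bgpid"       # live process: killed and reaped
killprocess_sketch 999999999      # hypothetical PID, almost certainly not live
```

The second call exercises the "No such process" branch the log shows during cleanup, where the PID was already gone by the time `kill -0` ran.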
00:04:42.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:42.722 10:59:10 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.722 10:59:10 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.722 10:59:10 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.722 10:59:10 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.722 10:59:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.722 10:59:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.722 10:59:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.984 10:59:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.984 10:59:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.984 10:59:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.984 10:59:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.984 10:59:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.984 10:59:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.984 10:59:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.984 10:59:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.984 10:59:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:42.984 10:59:10 thread -- scripts/common.sh@345 -- # : 1 00:04:42.984 10:59:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.984 10:59:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.984 10:59:10 thread -- scripts/common.sh@365 -- # decimal 1 00:04:42.984 10:59:10 thread -- scripts/common.sh@353 -- # local d=1 00:04:42.984 10:59:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.984 10:59:10 thread -- scripts/common.sh@355 -- # echo 1 00:04:42.984 10:59:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.984 10:59:10 thread -- scripts/common.sh@366 -- # decimal 2 00:04:42.984 10:59:10 thread -- scripts/common.sh@353 -- # local d=2 00:04:42.984 10:59:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.984 10:59:10 thread -- scripts/common.sh@355 -- # echo 2 00:04:42.984 10:59:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.984 10:59:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.984 10:59:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.984 10:59:10 thread -- scripts/common.sh@368 -- # return 0 00:04:42.984 10:59:10 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.984 10:59:10 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.984 --rc genhtml_branch_coverage=1 00:04:42.984 --rc genhtml_function_coverage=1 00:04:42.984 --rc genhtml_legend=1 00:04:42.984 --rc geninfo_all_blocks=1 00:04:42.984 --rc geninfo_unexecuted_blocks=1 00:04:42.984 00:04:42.984 ' 00:04:42.984 10:59:10 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.984 --rc genhtml_branch_coverage=1 00:04:42.984 --rc genhtml_function_coverage=1 00:04:42.984 --rc genhtml_legend=1 00:04:42.984 --rc geninfo_all_blocks=1 00:04:42.984 --rc geninfo_unexecuted_blocks=1 00:04:42.984 00:04:42.984 ' 00:04:42.984 10:59:10 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.984 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.984 --rc genhtml_branch_coverage=1 00:04:42.984 --rc genhtml_function_coverage=1 00:04:42.984 --rc genhtml_legend=1 00:04:42.984 --rc geninfo_all_blocks=1 00:04:42.984 --rc geninfo_unexecuted_blocks=1 00:04:42.984 00:04:42.984 ' 00:04:42.984 10:59:10 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.984 --rc genhtml_branch_coverage=1 00:04:42.984 --rc genhtml_function_coverage=1 00:04:42.984 --rc genhtml_legend=1 00:04:42.984 --rc geninfo_all_blocks=1 00:04:42.984 --rc geninfo_unexecuted_blocks=1 00:04:42.984 00:04:42.984 ' 00:04:42.984 10:59:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:42.984 10:59:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:42.984 10:59:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.984 10:59:10 thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.984 ************************************ 00:04:42.984 START TEST thread_poller_perf 00:04:42.984 ************************************ 00:04:42.984 10:59:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:42.984 [2024-11-20 10:59:10.289181] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
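[Editor's note] The lcov version gate traced above (`lt 1.15 2` via cmp_versions in scripts/common.sh) splits dotted versions on `.` into arrays and compares them field by field. A self-contained sketch of the same idea (simplified to less-than only; the real helper also handles `>`, `=`, and `-`-separated components, and missing fields here are assumed numeric):

```shell
#!/usr/bin/env bash
# Field-by-field dotted-version comparison in the spirit of cmp_versions.
# Returns 0 when $1 < $2, 1 otherwise.
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields compare as 0, so 1.15 is treated like 1.15.0.
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # the exact check from the log
```

With lcov reporting 1.x, the `lt 1.15 2` check passes and the script selects the legacy `--rc lcov_branch_coverage=1` option spelling seen in the exported LCOV_OPTS.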
00:04:42.984 [2024-11-20 10:59:10.289237] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3874365 ] 00:04:42.984 [2024-11-20 10:59:10.365384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.984 [2024-11-20 10:59:10.406604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.984 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:44.004 [2024-11-20T09:59:11.500Z] ====================================== 00:04:44.004 [2024-11-20T09:59:11.500Z] busy:2306985058 (cyc) 00:04:44.004 [2024-11-20T09:59:11.500Z] total_run_count: 406000 00:04:44.004 [2024-11-20T09:59:11.500Z] tsc_hz: 2300000000 (cyc) 00:04:44.004 [2024-11-20T09:59:11.500Z] ====================================== 00:04:44.004 [2024-11-20T09:59:11.500Z] poller_cost: 5682 (cyc), 2470 (nsec) 00:04:44.004 00:04:44.004 real 0m1.185s 00:04:44.004 user 0m1.112s 00:04:44.004 sys 0m0.069s 00:04:44.004 10:59:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.004 10:59:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.004 ************************************ 00:04:44.004 END TEST thread_poller_perf 00:04:44.004 ************************************ 00:04:44.004 10:59:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:44.004 10:59:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:44.004 10:59:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.004 10:59:11 thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.262 ************************************ 00:04:44.262 START TEST thread_poller_perf 00:04:44.262 
************************************ 00:04:44.262 10:59:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:44.262 [2024-11-20 10:59:11.540066] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:44.262 [2024-11-20 10:59:11.540137] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3874619 ] 00:04:44.262 [2024-11-20 10:59:11.617621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.262 [2024-11-20 10:59:11.657175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.262 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:45.640 [2024-11-20T09:59:13.136Z] ====================================== 00:04:45.640 [2024-11-20T09:59:13.136Z] busy:2301357886 (cyc) 00:04:45.640 [2024-11-20T09:59:13.136Z] total_run_count: 5425000 00:04:45.640 [2024-11-20T09:59:13.136Z] tsc_hz: 2300000000 (cyc) 00:04:45.640 [2024-11-20T09:59:13.136Z] ====================================== 00:04:45.640 [2024-11-20T09:59:13.136Z] poller_cost: 424 (cyc), 184 (nsec) 00:04:45.640 00:04:45.640 real 0m1.178s 00:04:45.640 user 0m1.093s 00:04:45.640 sys 0m0.081s 00:04:45.640 10:59:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.640 10:59:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.640 ************************************ 00:04:45.640 END TEST thread_poller_perf 00:04:45.640 ************************************ 00:04:45.640 10:59:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:45.640 00:04:45.640 real 0m2.675s 00:04:45.640 user 0m2.356s 00:04:45.640 sys 0m0.333s 00:04:45.640 10:59:12 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.640 10:59:12 thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.640 ************************************ 00:04:45.640 END TEST thread 00:04:45.640 ************************************ 00:04:45.640 10:59:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:45.640 10:59:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:45.640 10:59:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.640 10:59:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.640 10:59:12 -- common/autotest_common.sh@10 -- # set +x 00:04:45.640 ************************************ 00:04:45.640 START TEST app_cmdline 00:04:45.640 ************************************ 00:04:45.640 10:59:12 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:45.640 * Looking for test storage... 00:04:45.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:45.640 10:59:12 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.640 10:59:12 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.640 10:59:12 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.640 10:59:12 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:45.640 10:59:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.641 10:59:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:45.641 10:59:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.641 10:59:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.641 10:59:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.641 10:59:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.641 --rc genhtml_branch_coverage=1 
00:04:45.641 --rc genhtml_function_coverage=1 00:04:45.641 --rc genhtml_legend=1 00:04:45.641 --rc geninfo_all_blocks=1 00:04:45.641 --rc geninfo_unexecuted_blocks=1 00:04:45.641 00:04:45.641 ' 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.641 --rc genhtml_branch_coverage=1 00:04:45.641 --rc genhtml_function_coverage=1 00:04:45.641 --rc genhtml_legend=1 00:04:45.641 --rc geninfo_all_blocks=1 00:04:45.641 --rc geninfo_unexecuted_blocks=1 00:04:45.641 00:04:45.641 ' 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.641 --rc genhtml_branch_coverage=1 00:04:45.641 --rc genhtml_function_coverage=1 00:04:45.641 --rc genhtml_legend=1 00:04:45.641 --rc geninfo_all_blocks=1 00:04:45.641 --rc geninfo_unexecuted_blocks=1 00:04:45.641 00:04:45.641 ' 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.641 --rc genhtml_branch_coverage=1 00:04:45.641 --rc genhtml_function_coverage=1 00:04:45.641 --rc genhtml_legend=1 00:04:45.641 --rc geninfo_all_blocks=1 00:04:45.641 --rc geninfo_unexecuted_blocks=1 00:04:45.641 00:04:45.641 ' 00:04:45.641 10:59:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:45.641 10:59:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3874921 00:04:45.641 10:59:12 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:45.641 10:59:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3874921 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3874921 ']' 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.641 10:59:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:45.641 [2024-11-20 10:59:13.027862] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:04:45.641 [2024-11-20 10:59:13.027910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3874921 ] 00:04:45.641 [2024-11-20 10:59:13.103305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.900 [2024-11-20 10:59:13.144557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.900 10:59:13 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.900 10:59:13 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:45.900 10:59:13 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:46.159 { 00:04:46.159 "version": "SPDK v25.01-pre git sha1 46fd068fc", 00:04:46.159 "fields": { 00:04:46.159 "major": 25, 00:04:46.159 "minor": 1, 00:04:46.159 "patch": 0, 00:04:46.159 "suffix": "-pre", 00:04:46.159 "commit": "46fd068fc" 00:04:46.159 } 00:04:46.159 } 00:04:46.159 10:59:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:46.159 10:59:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:46.159 10:59:13 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:46.159 10:59:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:46.159 10:59:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:46.159 10:59:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:46.159 10:59:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.159 10:59:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:46.159 10:59:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:46.159 10:59:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:46.159 10:59:13 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:46.417 request: 00:04:46.417 { 00:04:46.417 "method": "env_dpdk_get_mem_stats", 00:04:46.417 "req_id": 1 00:04:46.417 } 00:04:46.417 Got JSON-RPC error response 00:04:46.417 response: 00:04:46.417 { 00:04:46.417 "code": -32601, 00:04:46.417 "message": "Method not found" 00:04:46.417 } 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.417 10:59:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3874921 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3874921 ']' 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3874921 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3874921 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3874921' 00:04:46.417 killing process with pid 3874921 00:04:46.417 
10:59:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 3874921 00:04:46.417 10:59:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 3874921 00:04:46.676 00:04:46.676 real 0m1.354s 00:04:46.676 user 0m1.579s 00:04:46.676 sys 0m0.446s 00:04:46.676 10:59:14 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.676 10:59:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:46.676 ************************************ 00:04:46.676 END TEST app_cmdline 00:04:46.676 ************************************ 00:04:46.936 10:59:14 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:46.936 10:59:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.936 10:59:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.936 10:59:14 -- common/autotest_common.sh@10 -- # set +x 00:04:46.936 ************************************ 00:04:46.936 START TEST version 00:04:46.936 ************************************ 00:04:46.936 10:59:14 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:46.936 * Looking for test storage... 
00:04:46.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:46.936 10:59:14 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.936 10:59:14 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.936 10:59:14 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.936 10:59:14 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.936 10:59:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.936 10:59:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.936 10:59:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.936 10:59:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.936 10:59:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.936 10:59:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.936 10:59:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.936 10:59:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.936 10:59:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.936 10:59:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.936 10:59:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.936 10:59:14 version -- scripts/common.sh@344 -- # case "$op" in 00:04:46.936 10:59:14 version -- scripts/common.sh@345 -- # : 1 00:04:46.936 10:59:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.936 10:59:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.936 10:59:14 version -- scripts/common.sh@365 -- # decimal 1 00:04:46.936 10:59:14 version -- scripts/common.sh@353 -- # local d=1 00:04:46.936 10:59:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.936 10:59:14 version -- scripts/common.sh@355 -- # echo 1 00:04:46.936 10:59:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.936 10:59:14 version -- scripts/common.sh@366 -- # decimal 2 00:04:46.936 10:59:14 version -- scripts/common.sh@353 -- # local d=2 00:04:46.936 10:59:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.936 10:59:14 version -- scripts/common.sh@355 -- # echo 2 00:04:46.936 10:59:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.936 10:59:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.936 10:59:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.936 10:59:14 version -- scripts/common.sh@368 -- # return 0 00:04:46.936 10:59:14 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.936 10:59:14 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.936 --rc genhtml_branch_coverage=1 00:04:46.936 --rc genhtml_function_coverage=1 00:04:46.936 --rc genhtml_legend=1 00:04:46.936 --rc geninfo_all_blocks=1 00:04:46.936 --rc geninfo_unexecuted_blocks=1 00:04:46.936 00:04:46.936 ' 00:04:46.936 10:59:14 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.936 --rc genhtml_branch_coverage=1 00:04:46.936 --rc genhtml_function_coverage=1 00:04:46.936 --rc genhtml_legend=1 00:04:46.936 --rc geninfo_all_blocks=1 00:04:46.936 --rc geninfo_unexecuted_blocks=1 00:04:46.936 00:04:46.936 ' 00:04:46.936 10:59:14 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.936 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.936 --rc genhtml_branch_coverage=1 00:04:46.936 --rc genhtml_function_coverage=1 00:04:46.936 --rc genhtml_legend=1 00:04:46.936 --rc geninfo_all_blocks=1 00:04:46.936 --rc geninfo_unexecuted_blocks=1 00:04:46.936 00:04:46.936 ' 00:04:46.936 10:59:14 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.936 --rc genhtml_branch_coverage=1 00:04:46.936 --rc genhtml_function_coverage=1 00:04:46.936 --rc genhtml_legend=1 00:04:46.936 --rc geninfo_all_blocks=1 00:04:46.936 --rc geninfo_unexecuted_blocks=1 00:04:46.936 00:04:46.936 ' 00:04:46.936 10:59:14 version -- app/version.sh@17 -- # get_header_version major 00:04:46.936 10:59:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:46.936 10:59:14 version -- app/version.sh@14 -- # cut -f2 00:04:46.936 10:59:14 version -- app/version.sh@14 -- # tr -d '"' 00:04:46.936 10:59:14 version -- app/version.sh@17 -- # major=25 00:04:46.936 10:59:14 version -- app/version.sh@18 -- # get_header_version minor 00:04:46.936 10:59:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:46.936 10:59:14 version -- app/version.sh@14 -- # cut -f2 00:04:46.936 10:59:14 version -- app/version.sh@14 -- # tr -d '"' 00:04:46.936 10:59:14 version -- app/version.sh@18 -- # minor=1 00:04:46.936 10:59:14 version -- app/version.sh@19 -- # get_header_version patch 00:04:46.936 10:59:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:46.936 10:59:14 version -- app/version.sh@14 -- # cut -f2 00:04:46.936 10:59:14 version -- app/version.sh@14 -- # tr -d '"' 00:04:46.936 
10:59:14 version -- app/version.sh@19 -- # patch=0 00:04:46.936 10:59:14 version -- app/version.sh@20 -- # get_header_version suffix 00:04:46.936 10:59:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:46.936 10:59:14 version -- app/version.sh@14 -- # cut -f2 00:04:46.936 10:59:14 version -- app/version.sh@14 -- # tr -d '"' 00:04:46.936 10:59:14 version -- app/version.sh@20 -- # suffix=-pre 00:04:46.936 10:59:14 version -- app/version.sh@22 -- # version=25.1 00:04:46.936 10:59:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:46.936 10:59:14 version -- app/version.sh@28 -- # version=25.1rc0 00:04:46.936 10:59:14 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:46.936 10:59:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:47.196 10:59:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:47.196 10:59:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:47.196 00:04:47.196 real 0m0.240s 00:04:47.196 user 0m0.143s 00:04:47.196 sys 0m0.139s 00:04:47.196 10:59:14 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.196 10:59:14 version -- common/autotest_common.sh@10 -- # set +x 00:04:47.196 ************************************ 00:04:47.196 END TEST version 00:04:47.196 ************************************ 00:04:47.196 10:59:14 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:47.196 10:59:14 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:47.196 10:59:14 -- spdk/autotest.sh@194 -- # uname -s 00:04:47.196 10:59:14 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:47.196 10:59:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:47.196 10:59:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:47.196 10:59:14 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:47.196 10:59:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:47.196 10:59:14 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:47.196 10:59:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.196 10:59:14 -- common/autotest_common.sh@10 -- # set +x 00:04:47.196 10:59:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:47.196 10:59:14 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:47.196 10:59:14 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:47.196 10:59:14 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:47.196 10:59:14 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:47.196 10:59:14 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:47.196 10:59:14 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:47.196 10:59:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.196 10:59:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.196 10:59:14 -- common/autotest_common.sh@10 -- # set +x 00:04:47.196 ************************************ 00:04:47.196 START TEST nvmf_tcp 00:04:47.196 ************************************ 00:04:47.196 10:59:14 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:47.196 * Looking for test storage... 
00:04:47.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:47.196 10:59:14 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.196 10:59:14 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.196 10:59:14 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.456 10:59:14 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.456 10:59:14 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:47.456 10:59:14 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.456 10:59:14 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.456 --rc genhtml_branch_coverage=1 00:04:47.456 --rc genhtml_function_coverage=1 00:04:47.456 --rc genhtml_legend=1 00:04:47.456 --rc geninfo_all_blocks=1 00:04:47.456 --rc geninfo_unexecuted_blocks=1 00:04:47.456 00:04:47.456 ' 00:04:47.456 10:59:14 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.456 --rc genhtml_branch_coverage=1 00:04:47.456 --rc genhtml_function_coverage=1 00:04:47.456 --rc genhtml_legend=1 00:04:47.456 --rc geninfo_all_blocks=1 00:04:47.456 --rc geninfo_unexecuted_blocks=1 00:04:47.456 00:04:47.456 ' 00:04:47.456 10:59:14 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:47.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.456 --rc genhtml_branch_coverage=1 00:04:47.456 --rc genhtml_function_coverage=1 00:04:47.456 --rc genhtml_legend=1 00:04:47.456 --rc geninfo_all_blocks=1 00:04:47.456 --rc geninfo_unexecuted_blocks=1 00:04:47.456 00:04:47.456 ' 00:04:47.456 10:59:14 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.456 --rc genhtml_branch_coverage=1 00:04:47.456 --rc genhtml_function_coverage=1 00:04:47.456 --rc genhtml_legend=1 00:04:47.456 --rc geninfo_all_blocks=1 00:04:47.456 --rc geninfo_unexecuted_blocks=1 00:04:47.456 00:04:47.456 ' 00:04:47.456 10:59:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:47.456 10:59:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:47.456 10:59:14 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:47.456 10:59:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.456 10:59:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.456 10:59:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.456 ************************************ 00:04:47.456 START TEST nvmf_target_core 00:04:47.456 ************************************ 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:47.456 * Looking for test storage... 
00:04:47.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:47.456 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.716 --rc genhtml_branch_coverage=1 00:04:47.716 --rc genhtml_function_coverage=1 00:04:47.716 --rc genhtml_legend=1 00:04:47.716 --rc geninfo_all_blocks=1 00:04:47.716 --rc geninfo_unexecuted_blocks=1 00:04:47.716 00:04:47.716 ' 00:04:47.716 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.717 --rc genhtml_branch_coverage=1 
00:04:47.717 --rc genhtml_function_coverage=1 00:04:47.717 --rc genhtml_legend=1 00:04:47.717 --rc geninfo_all_blocks=1 00:04:47.717 --rc geninfo_unexecuted_blocks=1 00:04:47.717 00:04:47.717 ' 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.717 --rc genhtml_branch_coverage=1 00:04:47.717 --rc genhtml_function_coverage=1 00:04:47.717 --rc genhtml_legend=1 00:04:47.717 --rc geninfo_all_blocks=1 00:04:47.717 --rc geninfo_unexecuted_blocks=1 00:04:47.717 00:04:47.717 ' 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.717 --rc genhtml_branch_coverage=1 00:04:47.717 --rc genhtml_function_coverage=1 00:04:47.717 --rc genhtml_legend=1 00:04:47.717 --rc geninfo_all_blocks=1 00:04:47.717 --rc geninfo_unexecuted_blocks=1 00:04:47.717 00:04:47.717 ' 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.717 10:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:47.717 ************************************ 00:04:47.717 START TEST nvmf_abort 00:04:47.717 ************************************ 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:47.717 * Looking for test storage... 
00:04:47.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.717 
10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.717 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.717 --rc genhtml_branch_coverage=1 00:04:47.717 --rc genhtml_function_coverage=1 00:04:47.717 --rc genhtml_legend=1 00:04:47.718 --rc geninfo_all_blocks=1 00:04:47.718 --rc 
geninfo_unexecuted_blocks=1 00:04:47.718 00:04:47.718 ' 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.718 --rc genhtml_branch_coverage=1 00:04:47.718 --rc genhtml_function_coverage=1 00:04:47.718 --rc genhtml_legend=1 00:04:47.718 --rc geninfo_all_blocks=1 00:04:47.718 --rc geninfo_unexecuted_blocks=1 00:04:47.718 00:04:47.718 ' 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.718 --rc genhtml_branch_coverage=1 00:04:47.718 --rc genhtml_function_coverage=1 00:04:47.718 --rc genhtml_legend=1 00:04:47.718 --rc geninfo_all_blocks=1 00:04:47.718 --rc geninfo_unexecuted_blocks=1 00:04:47.718 00:04:47.718 ' 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.718 --rc genhtml_branch_coverage=1 00:04:47.718 --rc genhtml_function_coverage=1 00:04:47.718 --rc genhtml_legend=1 00:04:47.718 --rc geninfo_all_blocks=1 00:04:47.718 --rc geninfo_unexecuted_blocks=1 00:04:47.718 00:04:47.718 ' 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.718 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.978 10:59:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:47.978 10:59:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:54.551 10:59:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:54.551 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:54.551 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:54.551 10:59:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:54.551 Found net devices under 0000:86:00.0: cvl_0_0 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:04:54.551 Found net devices under 0000:86:00.1: cvl_0_1 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:54.551 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:54.552 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:54.552 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:54.552 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:54.552 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:54.552 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:54.552 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:54.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:54.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:04:54.552 00:04:54.552 --- 10.0.0.2 ping statistics --- 00:04:54.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:54.552 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:54.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:54.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:04:54.552 00:04:54.552 --- 10.0.0.1 ping statistics --- 00:04:54.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:54.552 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3878599 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3878599 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3878599 ']' 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.552 [2024-11-20 10:59:21.335349] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:04:54.552 [2024-11-20 10:59:21.335399] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:54.552 [2024-11-20 10:59:21.412224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.552 [2024-11-20 10:59:21.454369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:54.552 [2024-11-20 10:59:21.454406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:54.552 [2024-11-20 10:59:21.454414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:54.552 [2024-11-20 10:59:21.454420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:54.552 [2024-11-20 10:59:21.454425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:54.552 [2024-11-20 10:59:21.455776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.552 [2024-11-20 10:59:21.455862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.552 [2024-11-20 10:59:21.455864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.552 [2024-11-20 10:59:21.604708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.552 Malloc0 00:04:54.552 10:59:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.552 Delay0 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.552 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.553 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:54.553 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.553 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.553 [2024-11-20 10:59:21.682752] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:54.553 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.553 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:54.553 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.553 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.553 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.553 10:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:54.553 [2024-11-20 10:59:21.819270] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:57.084 Initializing NVMe Controllers 00:04:57.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:57.084 controller IO queue size 128 less than required 00:04:57.084 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:57.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:57.084 Initialization complete. Launching workers. 
00:04:57.084 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36240 00:04:57.084 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36301, failed to submit 62 00:04:57.084 success 36244, unsuccessful 57, failed 0 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:57.084 rmmod nvme_tcp 00:04:57.084 rmmod nvme_fabrics 00:04:57.084 rmmod nvme_keyring 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:57.084 10:59:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3878599 ']' 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3878599 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3878599 ']' 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3878599 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3878599 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3878599' 00:04:57.084 killing process with pid 3878599 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3878599 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3878599 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:57.084 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:58.990 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:58.990 00:04:58.990 real 0m11.373s 00:04:58.990 user 0m12.140s 00:04:58.990 sys 0m5.495s 00:04:58.990 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.990 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.990 ************************************ 00:04:58.990 END TEST nvmf_abort 00:04:58.990 ************************************ 00:04:58.990 10:59:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:58.990 10:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:58.990 10:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.990 10:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:58.990 ************************************ 00:04:58.990 START TEST nvmf_ns_hotplug_stress 00:04:58.990 ************************************ 00:04:58.990 10:59:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:59.250 * Looking for test storage... 00:04:59.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.250 
10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:59.250 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.251 10:59:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.251 --rc genhtml_branch_coverage=1 00:04:59.251 --rc genhtml_function_coverage=1 00:04:59.251 --rc genhtml_legend=1 00:04:59.251 --rc geninfo_all_blocks=1 00:04:59.251 --rc geninfo_unexecuted_blocks=1 00:04:59.251 00:04:59.251 ' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.251 --rc genhtml_branch_coverage=1 00:04:59.251 --rc genhtml_function_coverage=1 00:04:59.251 --rc genhtml_legend=1 00:04:59.251 --rc geninfo_all_blocks=1 00:04:59.251 --rc geninfo_unexecuted_blocks=1 00:04:59.251 00:04:59.251 ' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.251 --rc genhtml_branch_coverage=1 00:04:59.251 --rc genhtml_function_coverage=1 00:04:59.251 --rc genhtml_legend=1 00:04:59.251 --rc geninfo_all_blocks=1 00:04:59.251 --rc geninfo_unexecuted_blocks=1 00:04:59.251 00:04:59.251 ' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.251 --rc genhtml_branch_coverage=1 00:04:59.251 --rc genhtml_function_coverage=1 00:04:59.251 --rc genhtml_legend=1 00:04:59.251 --rc geninfo_all_blocks=1 00:04:59.251 --rc geninfo_unexecuted_blocks=1 00:04:59.251 
00:04:59.251 ' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:59.251 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:05.821 10:59:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:05.821 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:05.822 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:05.822 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:05.822 10:59:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:05.822 Found net devices under 0000:86:00.0: cvl_0_0 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:05.822 10:59:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:05.822 Found net devices under 0000:86:00.1: cvl_0_1 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:05.822 10:59:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:05.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:05.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:05:05.822 00:05:05.822 --- 10.0.0.2 ping statistics --- 00:05:05.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:05.822 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:05.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:05.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:05:05.822 00:05:05.822 --- 10.0.0.1 ping statistics --- 00:05:05.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:05.822 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3882620 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3882620 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3882620 ']' 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.822 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
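The `nvmf_tcp_init` sequence traced above (nvmf/common.sh @250–@291) amounts to the following sketch. It is a condensed reading of this run's trace, assuming root privileges and the `cvl_0_0`/`cvl_0_1` ports found under 0000:86:00.1; it is system configuration, not portable code.

```shell
# Sketch of the TCP test-network setup traced above (nvmf_tcp_init);
# interface names and addresses are the ones seen in this run.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Isolate the target port in its own network namespace.
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

# Address both ends and bring the links up.
ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Allow NVMe/TCP traffic in on the initiator side, then verify reachability
# in both directions before the target is started.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"
```
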
00:05:05.823 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.823 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:05.823 [2024-11-20 10:59:32.753999] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:05:05.823 [2024-11-20 10:59:32.754050] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:05.823 [2024-11-20 10:59:32.836644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:05.823 [2024-11-20 10:59:32.879415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:05.823 [2024-11-20 10:59:32.879447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:05.823 [2024-11-20 10:59:32.879454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:05.823 [2024-11-20 10:59:32.879461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:05.823 [2024-11-20 10:59:32.879466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
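`nvmfappstart` then launches the target inside that namespace and blocks until its RPC socket answers. A rough sketch, with paths and masks taken from this run and `waitforlisten` simplified to a plain poll loop (an assumption, not the helper's exact logic):

```shell
# Start nvmf_tgt inside the target namespace and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    # The target is ready once it answers an RPC on the UNIX domain socket.
    if "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    kill -0 "$nvmfpid" || exit 1  # bail out if the target died during startup
    sleep 0.1
done
```
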
00:05:05.823 [2024-11-20 10:59:32.880863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.823 [2024-11-20 10:59:32.880903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.823 [2024-11-20 10:59:32.880904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.823 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.823 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:05.823 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:05.823 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.823 10:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:05.823 10:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:05.823 10:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:05.823 10:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:05.823 [2024-11-20 10:59:33.180973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.823 10:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:06.080 10:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:06.338 [2024-11-20 10:59:33.578405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:06.338 10:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:06.338 10:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:06.597 Malloc0 00:05:06.597 10:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:06.855 Delay0 00:05:06.855 10:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.114 10:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:07.114 NULL1 00:05:07.114 10:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:07.373 10:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:07.373 10:59:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3882895 00:05:07.373 10:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:07.373 10:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.631 10:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.889 10:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:07.889 10:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:07.889 true 00:05:07.889 10:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:07.889 10:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.147 10:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.405 10:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:08.405 10:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:08.663 true 00:05:08.663 10:59:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:08.663 10:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.598 Read completed with error (sct=0, sc=11) 00:05:09.598 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.857 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:09.857 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:10.115 true 00:05:10.115 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:10.115 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.115 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.373 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:10.373 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:10.631 true 00:05:10.631 10:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:10.631 10:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.006 10:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.006 10:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:12.006 10:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:12.265 true 00:05:12.265 10:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:12.265 10:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.200 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:05:13.200 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.200 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:13.200 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:13.458 true 00:05:13.458 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:13.458 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.718 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.977 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:13.977 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:13.977 true 00:05:13.977 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:13.977 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.235 10:59:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.494 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:14.494 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:14.753 true 00:05:14.753 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:14.753 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.688 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.688 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:15.688 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:15.947 true 00:05:15.947 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:15.947 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.206 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.206 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:16.206 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:16.464 true 00:05:16.464 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:16.464 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.400 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.658 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:17.658 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:17.917 true 00:05:17.917 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:17.917 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.176 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.176 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:18.176 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:18.434 true 00:05:18.434 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:18.434 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.812 10:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.812 10:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:19.812 10:59:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:20.071 true 00:05:20.071 10:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:20.071 10:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.006 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.006 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:21.006 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:21.265 true 00:05:21.265 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:21.265 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.523 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.523 10:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:21.523 10:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:21.782 true 00:05:21.782 10:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:21.782 10:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.799 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.057 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:23.058 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:23.316 true 00:05:23.316 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:23.316 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.251 
10:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.251 10:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:24.251 10:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:24.510 true 00:05:24.510 10:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:24.510 10:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.769 10:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.027 10:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:25.027 10:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:25.027 true 00:05:25.027 10:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895 00:05:25.027 10:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.403 10:59:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:26.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.403 10:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:05:26.403 10:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:05:26.662 true
00:05:26.662 10:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:26.662 10:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:27.678 10:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:27.678 10:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:05:27.678 10:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:05:27.937 true
00:05:27.937 10:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:27.937 10:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:28.195 10:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:28.195 10:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:05:28.195 10:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:05:28.453 true
00:05:28.453 10:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:28.453 10:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:29.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.831 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:29.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.831 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:05:29.831 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:05:30.089 true
00:05:30.090 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:30.090 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:31.026 10:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:31.026 10:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:05:31.026 10:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:31.285 true
00:05:31.285 10:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:31.285 10:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:31.543 10:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:31.543 10:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:05:31.543 10:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:05:31.815 true
00:05:31.815 10:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:31.815 10:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:33.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:33.192 11:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:33.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:33.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:33.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:33.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:33.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:33.192 11:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:33.192 11:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:33.451 true
00:05:33.451 11:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:33.451 11:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:34.274 11:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:34.274 11:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:05:34.274 11:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:05:34.533 true
00:05:34.533 11:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:34.533 11:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.791 11:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:35.049 11:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:35.049 11:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:35.049 true
00:05:35.049 11:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:35.049 11:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:36.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.424 11:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:36.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.424 11:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:36.424 11:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:36.683 true
00:05:36.683 11:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:36.683 11:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.622 11:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:37.622 Initializing NVMe Controllers
00:05:37.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:37.622 Controller IO queue size 128, less than required.
00:05:37.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:37.622 Controller IO queue size 128, less than required.
00:05:37.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:37.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:37.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:37.622 Initialization complete. Launching workers.
00:05:37.622 ========================================================
00:05:37.622 Latency(us)
00:05:37.622 Device Information : IOPS MiB/s Average min max
00:05:37.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1899.97 0.93 44198.88 1308.40 1095188.49
00:05:37.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16581.52 8.10 7700.25 1621.11 433315.16
00:05:37.622 ========================================================
00:05:37.622 Total : 18481.48 9.02 11452.45 1308.40 1095188.49
00:05:37.622
00:05:37.622 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:37.622 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:37.880 true
00:05:37.880 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3882895
00:05:37.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3882895) - No such process
00:05:37.880 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3882895
00:05:37.880 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.139 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:38.398 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:38.398 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:38.398 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:38.398 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:38.398 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:38.398 null0
00:05:38.398 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:38.398 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:38.398 11:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:38.656 null1
00:05:38.656 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:38.656 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:38.656 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:38.914 null2
00:05:38.914 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:38.914 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:38.915 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:05:39.173 null3
00:05:39.173 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:39.173 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:39.173 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:05:39.173 null4
00:05:39.432 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:39.432 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:39.432 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:05:39.432 null5
00:05:39.432 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:39.432 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:39.432 11:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:05:39.691 null6
00:05:39.691 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:39.691 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:39.691 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:05:39.950 null7
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:39.950 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3888625 3888627 3888628 3888631 3888632 3888634 3888636 3888638
00:05:39.951 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:05:39.951 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:39.951 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:39.951 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:40.209 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:40.209 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:40.209 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:40.209 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:40.209 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:40.209 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:40.209 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:40.209 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:40.209 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.209 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:40.467 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.725 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.726 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:40.726 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.726 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:40.726 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.726 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.726 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:40.726 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:40.726 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:40.726 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:40.984 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:40.984 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:40.984 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:40.984 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:40.984 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:40.984 11:00:08
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.984 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.984 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.243 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.502 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.761 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.761 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.761 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.761 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.761 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.761 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.761 11:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.761 11:00:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.761 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.020 11:00:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.020 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.279 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.279 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.279 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.279 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.279 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.279 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.279 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.279 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.538 11:00:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.538 11:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.538 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.538 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:05:42.796 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.796 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.796 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.796 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.796 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.796 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.796 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.796 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.797 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.055 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.055 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.055 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.055 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.055 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.055 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.055 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.055 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.314 11:00:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.314 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.573 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.573 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.574 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.574 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.574 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.574 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.574 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.574 11:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:05:43.833 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.093 11:00:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:44.093 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:44.093 rmmod nvme_tcp 00:05:44.093 rmmod nvme_fabrics 00:05:44.093 rmmod nvme_keyring 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3882620 ']' 00:05:44.353 11:00:11 
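The trace above (target/ns_hotplug_stress.sh lines 16-18) shows ten rounds of attaching null bdevs null0-null7 as namespaces 1-8 to cnode1 and detaching them again; the interleaved ordering in the log suggests the RPCs are issued concurrently. A minimal sequential dry-run sketch of that pattern — not the actual SPDK script; `RPC` defaults to a stub so the commands are printed instead of sent:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the hotplug stress pattern seen in the trace: ten
# rounds, each adding namespaces 1-8 (backed by null bdevs null0-null7)
# to the subsystem and then removing them. In the real test RPC would be
# spdk/scripts/rpc.py; here it is a stub that just echoes the call.
RPC=${RPC:-echo}
NQN=nqn.2016-06.io.spdk:cnode1

stress_round() {
    local n
    for n in {1..8}; do
        $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    for n in {1..8}; do
        $RPC nvmf_subsystem_remove_ns "$NQN" "$n"
    done
}

for (( i = 0; i < 10; ++i )); do
    stress_round
done
```

With the echo stub this prints 160 RPC invocations (16 per round); pointing `RPC` at a real rpc.py would issue them against a running target.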
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3882620 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3882620 ']' 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3882620 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3882620 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3882620' 00:05:44.353 killing process with pid 3882620 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3882620 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3882620 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:05:44.353 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:44.354 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:44.354 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:44.354 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:44.354 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:44.354 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:44.354 11:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:46.890 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:46.890 00:05:46.890 real 0m47.413s 00:05:46.890 user 3m13.640s 00:05:46.890 sys 0m15.536s 00:05:46.890 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.890 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:46.890 ************************************ 00:05:46.890 END TEST nvmf_ns_hotplug_stress 00:05:46.890 ************************************ 00:05:46.890 11:00:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:46.890 11:00:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:46.890 11:00:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.890 11:00:13 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.890 ************************************ 00:05:46.890 START TEST nvmf_delete_subsystem 00:05:46.890 ************************************ 00:05:46.890 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:46.890 * Looking for test storage... 00:05:46.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.890 11:00:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.890 --rc genhtml_branch_coverage=1 00:05:46.890 --rc genhtml_function_coverage=1 00:05:46.890 --rc genhtml_legend=1 00:05:46.890 --rc geninfo_all_blocks=1 00:05:46.890 --rc geninfo_unexecuted_blocks=1 00:05:46.890 00:05:46.890 ' 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.890 --rc genhtml_branch_coverage=1 00:05:46.890 --rc genhtml_function_coverage=1 00:05:46.890 --rc genhtml_legend=1 00:05:46.890 --rc geninfo_all_blocks=1 00:05:46.890 --rc geninfo_unexecuted_blocks=1 00:05:46.890 00:05:46.890 ' 00:05:46.890 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.890 --rc genhtml_branch_coverage=1 00:05:46.890 --rc genhtml_function_coverage=1 00:05:46.890 --rc genhtml_legend=1 00:05:46.891 --rc geninfo_all_blocks=1 00:05:46.891 --rc geninfo_unexecuted_blocks=1 00:05:46.891 00:05:46.891 ' 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.891 --rc 
genhtml_branch_coverage=1 00:05:46.891 --rc genhtml_function_coverage=1 00:05:46.891 --rc genhtml_legend=1 00:05:46.891 --rc geninfo_all_blocks=1 00:05:46.891 --rc geninfo_unexecuted_blocks=1 00:05:46.891 00:05:46.891 ' 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.891 11:00:14 
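The `cmp_versions` steps traced above (scripts/common.sh: `IFS=.-:`, `read -ra ver1`, `decimal`, component-wise `(( ver1[v] < ver2[v] ))`) implement a dotted-version less-than test, here used to check whether lcov 1.15 is older than 2. A minimal numeric-only sketch of that comparison — the real helper also validates non-numeric components, which this sketch assumes away:

```shell
# ver_lt A B: return success if version A sorts before version B.
# Split both versions on '.', '-' and ':' and compare numerically,
# component by component, padding the shorter one with zeros.
ver_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1   # equal versions are not less-than
}
```

For example, `ver_lt 1.15 2` succeeds (1 < 2 decides it at the first component), which is the branch the log takes before exporting the LCOV_OPTS fallbacks.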
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:46.891 11:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:53.455 11:00:19 
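The `gather_supported_nvmf_pci_devs` trace above buckets known NIC PCI device IDs into `e810`, `x722` and `mlx` families keyed by a `vendor:device` cache, then selects the e810 list for this TCP run. A hedged sketch of that classification; the bus addresses are example values (only 0000:86:00.0 appears in the log, and `(( 2 == 0 ))` implies a second device), and the cache is normally populated by scanning sysfs:

```shell
# Sketch of the NIC classification from the trace: look up cached PCI
# bus addresses by "vendor:device" ID and pick the e810 family. The
# cache contents here are stand-in example values, not real scan output.
intel=0x8086 mellanox=0x15b3
declare -A pci_bus_cache=(
    ["$intel:0x1592"]=""                           # e810 (100G), none found
    ["$intel:0x159b"]="0000:86:00.0 0000:86:00.1"  # e810 (25G), two ports
    ["$intel:0x37d2"]=""                           # x722, none found
)
e810=()
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
    echo "Found $pci ($intel - 0x159b)"
done
```

The unquoted `${pci_bus_cache[...]}` expansions deliberately word-split the space-separated bus addresses into array elements, matching how the log ends up with two `pci_devs` entries.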
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:53.455 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:53.455 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:53.455 Found net devices under 0000:86:00.0: cvl_0_0 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:05:53.455 Found net devices under 0000:86:00.1: cvl_0_1 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:53.455 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:53.456 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:53.456 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:53.456 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:53.456 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:53.456 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:53.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:53.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:05:53.456 00:05:53.456 --- 10.0.0.2 ping statistics --- 00:05:53.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.456 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:53.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:53.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:05:53.456 00:05:53.456 --- 10.0.0.1 ping statistics --- 00:05:53.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.456 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:53.456 11:00:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3893413 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3893413 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3893413 ']' 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.456 [2024-11-20 11:00:20.277858] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:05:53.456 [2024-11-20 11:00:20.277902] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.456 [2024-11-20 11:00:20.358685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.456 [2024-11-20 11:00:20.400934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.456 [2024-11-20 11:00:20.400977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.456 [2024-11-20 11:00:20.400984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.456 [2024-11-20 11:00:20.400991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.456 [2024-11-20 11:00:20.400996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:53.456 [2024-11-20 11:00:20.402278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.456 [2024-11-20 11:00:20.402280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.456 [2024-11-20 11:00:20.547901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.456 [2024-11-20 11:00:20.568104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.456 NULL1 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.456 Delay0 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.456 11:00:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3893635 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:53.456 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:53.456 [2024-11-20 11:00:20.679053] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:05:55.359 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:55.359 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.359 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 starting I/O failed: -6 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 starting I/O failed: -6 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 starting I/O failed: -6 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 starting I/O failed: -6 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 starting I/O failed: -6 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 starting I/O failed: -6 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 starting I/O failed: -6 
00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 starting I/O failed: -6 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 starting I/O failed: -6 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 starting I/O failed: -6 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 [2024-11-20 11:00:22.836203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ef4a0 is same with the state(6) to be set 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.359 Read completed with error (sct=0, sc=8) 00:05:55.359 Write completed with error (sct=0, sc=8) 00:05:55.360 
Write completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Read completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Write completed with error (sct=0, sc=8) 00:05:55.360 Read completed with 
00:05:55.360 Read completed with error (sct=0, sc=8)
00:05:55.360 Write completed with error (sct=0, sc=8)
00:05:55.360 [2024-11-20 11:00:22.836750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ef2c0 is same with the state(6) to be set
00:05:55.360 starting I/O failed: -6
00:05:55.360 [2024-11-20 11:00:22.838205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f815800d4d0 is same with the state(6) to be set
00:05:56.736 [2024-11-20 11:00:23.813963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f09a0 is same with the state(6) to be set
00:05:56.736 Read completed with error (sct=0, sc=8)
00:05:56.736 Write completed with error (sct=0, sc=8)
00:05:56.736 [2024-11-20 11:00:23.839601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ef680 is same with the state(6) to be set
00:05:56.736 [2024-11-20 11:00:23.840945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8158000c40 is same with the state(6) to be set
00:05:56.736 [2024-11-20 11:00:23.841111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f815800d020 is same with the state(6) to be set
00:05:56.737 [2024-11-20 11:00:23.841632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f815800d800 is same with the state(6) to be set
00:05:56.737 Initializing NVMe Controllers
00:05:56.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:56.737 Controller IO queue size 128, less than required.
00:05:56.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:56.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:05:56.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:05:56.737 Initialization complete. Launching workers.
00:05:56.737 ========================================================
00:05:56.737 Latency(us)
00:05:56.737 Device Information : IOPS MiB/s Average min max
00:05:56.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 153.72 0.08 884073.67 286.58 1007270.48
00:05:56.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.63 0.08 1073783.52 524.81 2001942.17
00:05:56.737 ========================================================
00:05:56.737 Total : 324.35 0.16 983875.02 286.58 2001942.17
00:05:56.737
00:05:56.737 [2024-11-20 11:00:23.842158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f09a0 (9): Bad file descriptor
00:05:56.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:05:56.737 11:00:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:56.737 11:00:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:05:56.737 11:00:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3893635
00:05:56.737 11:00:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3893635
00:05:56.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3893635) - No such process
00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3893635
00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:05:56.996 11:00:24
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3893635 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3893635 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:56.996 
11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.996 [2024-11-20 11:00:24.372260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3894130 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3894130 00:05:56.996 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.996 [2024-11-20 11:00:24.462491] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:57.564 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:57.564 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3894130 00:05:57.564 11:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.131 11:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.131 11:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3894130 00:05:58.131 11:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.698 11:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.698 11:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3894130 00:05:58.698 11:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.956 11:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.956 11:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3894130 00:05:58.956 11:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.526 11:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.526 11:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3894130 00:05:59.526 11:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:00.103 11:00:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:00.103 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3894130
00:06:00.103 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:00.363 Initializing NVMe Controllers
00:06:00.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:00.363 Controller IO queue size 128, less than required.
00:06:00.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:00.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:00.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:00.363 Initialization complete. Launching workers.
00:06:00.363 ========================================================
00:06:00.363 Latency(us)
00:06:00.363 Device Information : IOPS MiB/s Average min max
00:06:00.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002002.35 1000140.14 1005727.78
00:06:00.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003451.58 1000195.38 1009548.67
00:06:00.363 ========================================================
00:06:00.363 Total : 256.00 0.12 1002726.96 1000140.14 1009548.67
00:06:00.363
00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3894130
00:06:00.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3894130) - No such process
00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 3894130 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:00.622 rmmod nvme_tcp 00:06:00.622 rmmod nvme_fabrics 00:06:00.622 rmmod nvme_keyring 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3893413 ']' 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3893413 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3893413 ']' 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3893413 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:00.622 11:00:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.622 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3893413 00:06:00.622 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.622 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.622 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3893413' 00:06:00.622 killing process with pid 3893413 00:06:00.622 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3893413 00:06:00.622 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3893413 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:00.881 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.786 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:02.786 00:06:02.786 real 0m16.294s 00:06:02.786 user 0m29.263s 00:06:02.786 sys 0m5.611s 00:06:02.786 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.786 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.786 ************************************ 00:06:02.786 END TEST nvmf_delete_subsystem 00:06:02.786 ************************************ 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:03.046 ************************************ 00:06:03.046 START TEST nvmf_host_management 00:06:03.046 ************************************ 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:03.046 * Looking for test storage... 
00:06:03.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:03.046 11:00:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.046 11:00:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.046 --rc genhtml_branch_coverage=1 00:06:03.046 --rc genhtml_function_coverage=1 00:06:03.046 --rc genhtml_legend=1 00:06:03.046 --rc geninfo_all_blocks=1 00:06:03.046 --rc geninfo_unexecuted_blocks=1 00:06:03.046 00:06:03.046 ' 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.046 --rc genhtml_branch_coverage=1 00:06:03.046 --rc genhtml_function_coverage=1 00:06:03.046 --rc genhtml_legend=1 00:06:03.046 --rc geninfo_all_blocks=1 00:06:03.046 --rc geninfo_unexecuted_blocks=1 00:06:03.046 00:06:03.046 ' 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.046 --rc genhtml_branch_coverage=1 00:06:03.046 --rc genhtml_function_coverage=1 00:06:03.046 --rc genhtml_legend=1 00:06:03.046 --rc geninfo_all_blocks=1 00:06:03.046 --rc geninfo_unexecuted_blocks=1 00:06:03.046 00:06:03.046 ' 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.046 --rc genhtml_branch_coverage=1 00:06:03.046 --rc genhtml_function_coverage=1 00:06:03.046 --rc genhtml_legend=1 00:06:03.046 --rc geninfo_all_blocks=1 00:06:03.046 --rc geninfo_unexecuted_blocks=1 00:06:03.046 00:06:03.046 ' 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:03.046 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.047 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.047 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:03.047 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.047 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.047 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.306 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.306 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.306 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.306 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.306 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.306 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.306 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:03.307 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:09.881 11:00:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:09.881 11:00:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:09.881 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:09.882 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:09.882 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:09.882 11:00:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:09.882 Found net devices under 0000:86:00.0: cvl_0_0 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:09.882 Found net devices under 0000:86:00.1: cvl_0_1 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:09.882 11:00:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:09.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:09.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:06:09.882 00:06:09.882 --- 10.0.0.2 ping statistics --- 00:06:09.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.882 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:09.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:09.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:06:09.882 00:06:09.882 --- 10.0.0.1 ping statistics --- 00:06:09.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.882 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3898359 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3898359 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3898359 ']' 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.882 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:09.882 [2024-11-20 11:00:36.629496] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:06:09.882 [2024-11-20 11:00:36.629540] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:09.882 [2024-11-20 11:00:36.707670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.882 [2024-11-20 11:00:36.749521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:09.882 [2024-11-20 11:00:36.749560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:09.882 [2024-11-20 11:00:36.749568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:09.882 [2024-11-20 11:00:36.749574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:09.883 [2024-11-20 11:00:36.749579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:09.883 [2024-11-20 11:00:36.751186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.883 [2024-11-20 11:00:36.751291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.883 [2024-11-20 11:00:36.751379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:09.883 [2024-11-20 11:00:36.751378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:09.883 [2024-11-20 11:00:36.900304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:09.883 11:00:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:09.883 Malloc0 00:06:09.883 [2024-11-20 11:00:36.969519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.883 11:00:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3898441 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3898441 /var/tmp/bdevperf.sock 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3898441 ']' 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:09.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:09.883 { 00:06:09.883 "params": { 00:06:09.883 "name": "Nvme$subsystem", 00:06:09.883 "trtype": "$TEST_TRANSPORT", 00:06:09.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:09.883 "adrfam": "ipv4", 00:06:09.883 "trsvcid": "$NVMF_PORT", 00:06:09.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:09.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:09.883 "hdgst": ${hdgst:-false}, 
00:06:09.883 "ddgst": ${ddgst:-false} 00:06:09.883 }, 00:06:09.883 "method": "bdev_nvme_attach_controller" 00:06:09.883 } 00:06:09.883 EOF 00:06:09.883 )") 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:09.883 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:09.883 "params": { 00:06:09.883 "name": "Nvme0", 00:06:09.883 "trtype": "tcp", 00:06:09.883 "traddr": "10.0.0.2", 00:06:09.883 "adrfam": "ipv4", 00:06:09.883 "trsvcid": "4420", 00:06:09.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:09.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:09.883 "hdgst": false, 00:06:09.883 "ddgst": false 00:06:09.883 }, 00:06:09.883 "method": "bdev_nvme_attach_controller" 00:06:09.883 }' 00:06:09.883 [2024-11-20 11:00:37.065980] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:06:09.883 [2024-11-20 11:00:37.066026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898441 ] 00:06:09.883 [2024-11-20 11:00:37.143271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.883 [2024-11-20 11:00:37.184835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.883 Running I/O for 10 seconds... 
00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=97 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 97 -ge 100 ']' 00:06:10.142 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.435 [2024-11-20 11:00:37.749082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:10.435 [2024-11-20 11:00:37.749124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.435 [2024-11-20 11:00:37.749135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:10.435 [2024-11-20 11:00:37.749148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.749157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:10.436 [2024-11-20 11:00:37.749164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.749171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:10.436 [2024-11-20 11:00:37.749179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.749186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8500 is same with the state(6) to be set 00:06:10.436 [2024-11-20 11:00:37.750294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 
11:00:37.750378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750469] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 
[2024-11-20 11:00:37.750650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.436 [2024-11-20 11:00:37.750783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.436 [2024-11-20 11:00:37.750790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.750985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.750993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 
[2024-11-20 11:00:37.751009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:10.437 [2024-11-20 11:00:37.751313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:10.437 [2024-11-20 11:00:37.751321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x901810 is same with the state(6) to be set 00:06:10.437 [2024-11-20 11:00:37.752315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:10.437 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.437 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:10.437 task offset: 106368 on job bdev=Nvme0n1 fails 00:06:10.437 00:06:10.437 Latency(us) 00:06:10.437 [2024-11-20T10:00:37.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:10.437 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:06:10.437 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:10.437 Verification LBA range: start 0x0 length 0x400 00:06:10.437 Nvme0n1 : 0.40 1904.49 119.03 158.71 0.00 30175.16 4559.03 27582.11 00:06:10.437 [2024-11-20T10:00:37.933Z] =================================================================================================================== 00:06:10.437 [2024-11-20T10:00:37.933Z] Total : 1904.49 119.03 158.71 0.00 30175.16 4559.03 27582.11 00:06:10.437 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.437 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.438 [2024-11-20 11:00:37.754758] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.438 [2024-11-20 11:00:37.754781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8500 (9): Bad file descriptor 00:06:10.438 [2024-11-20 11:00:37.764088] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
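The `waitforio` calls traced earlier (i = 10, `read_io_count=97` then `707`, compared with `-ge 100`, `sleep 0.25` between polls) come from `target/host_management.sh`. A sketch of that helper, reconstructed from the trace (`rpc_cmd` is the test suite's JSON-RPC wrapper and is assumed to be on the path, as it is in the log):

```shell
# Sketch of waitforio from host_management.sh: poll bdev_get_iostat over
# the bdevperf RPC socket until the bdev has completed at least 100 reads,
# giving up after 10 attempts spaced 0.25 s apart.
waitforio() {
    local rpc_sock=$1 bdev=$2
    [ -n "$rpc_sock" ] || return 1   # mirrors the '[' -z ... ']' guards
    [ -n "$bdev" ] || return 1
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

Once `waitforio` returns 0 the test knows I/O is flowing, and only then removes the host from the subsystem to provoke the qpair aborts and controller reset logged above.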
00:06:10.438 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.438 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:11.374 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3898441 00:06:11.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3898441) - No such process 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:11.375 { 00:06:11.375 "params": { 00:06:11.375 "name": "Nvme$subsystem", 00:06:11.375 "trtype": "$TEST_TRANSPORT", 00:06:11.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:11.375 "adrfam": "ipv4", 00:06:11.375 "trsvcid": "$NVMF_PORT", 00:06:11.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:11.375 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:11.375 "hdgst": ${hdgst:-false}, 00:06:11.375 "ddgst": ${ddgst:-false} 00:06:11.375 }, 00:06:11.375 "method": "bdev_nvme_attach_controller" 00:06:11.375 } 00:06:11.375 EOF 00:06:11.375 )") 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:11.375 11:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:11.375 "params": { 00:06:11.375 "name": "Nvme0", 00:06:11.375 "trtype": "tcp", 00:06:11.375 "traddr": "10.0.0.2", 00:06:11.375 "adrfam": "ipv4", 00:06:11.375 "trsvcid": "4420", 00:06:11.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:11.375 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:11.375 "hdgst": false, 00:06:11.375 "ddgst": false 00:06:11.375 }, 00:06:11.375 "method": "bdev_nvme_attach_controller" 00:06:11.375 }' 00:06:11.375 [2024-11-20 11:00:38.821407] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:06:11.375 [2024-11-20 11:00:38.821454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898868 ] 00:06:11.633 [2024-11-20 11:00:38.897668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.633 [2024-11-20 11:00:38.937216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.891 Running I/O for 1 seconds... 
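The `kill: (3898441) - No such process` failure at `host_management.sh: line 91` just above is a benign race, not an error: the first bdevperf exited on its own after its timed run, so the script deliberately tolerates a kill of an already-gone pid. A minimal reproduction of that guard (the backgrounded `sleep` stands in for bdevperf and is not part of the original script):

```shell
# Tolerant-kill pattern from host_management.sh line 91: the workload may
# already have exited, so a failed kill must not abort the test.
sleep 0.1 &
perfpid=$!
wait "$perfpid"                          # let the "workload" finish first
kill -9 "$perfpid" 2>/dev/null || true   # "No such process" is absorbed here
echo "cleanup continues"
```

Without the `|| true` fallback, a script running under `set -e` (as these autotest scripts do) would abort on the failed kill instead of proceeding to the second bdevperf run shown here.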
00:06:12.825 1984.00 IOPS, 124.00 MiB/s 00:06:12.825 Latency(us) 00:06:12.825 [2024-11-20T10:00:40.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:12.825 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:12.825 Verification LBA range: start 0x0 length 0x400 00:06:12.825 Nvme0n1 : 1.02 1998.72 124.92 0.00 0.00 31516.39 6069.20 27354.16 00:06:12.825 [2024-11-20T10:00:40.321Z] =================================================================================================================== 00:06:12.825 [2024-11-20T10:00:40.321Z] Total : 1998.72 124.92 0.00 0.00 31516.39 6069.20 27354.16 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:13.085 11:00:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:13.085 rmmod nvme_tcp 00:06:13.085 rmmod nvme_fabrics 00:06:13.085 rmmod nvme_keyring 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3898359 ']' 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3898359 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3898359 ']' 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3898359 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3898359 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3898359' 00:06:13.085 killing process with pid 3898359 00:06:13.085 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3898359 00:06:13.085 11:00:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3898359 00:06:13.344 [2024-11-20 11:00:40.617036] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.344 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.345 11:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.249 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:15.249 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:15.249 00:06:15.249 real 0m12.384s 00:06:15.249 user 0m19.376s 
00:06:15.249 sys 0m5.632s 00:06:15.249 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.249 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:15.249 ************************************ 00:06:15.249 END TEST nvmf_host_management 00:06:15.249 ************************************ 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.509 ************************************ 00:06:15.509 START TEST nvmf_lvol 00:06:15.509 ************************************ 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:15.509 * Looking for test storage... 
00:06:15.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.509 11:00:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:15.509 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.510 --rc genhtml_branch_coverage=1 00:06:15.510 --rc genhtml_function_coverage=1 00:06:15.510 --rc genhtml_legend=1 00:06:15.510 --rc geninfo_all_blocks=1 00:06:15.510 --rc geninfo_unexecuted_blocks=1 
00:06:15.510 00:06:15.510 ' 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.510 --rc genhtml_branch_coverage=1 00:06:15.510 --rc genhtml_function_coverage=1 00:06:15.510 --rc genhtml_legend=1 00:06:15.510 --rc geninfo_all_blocks=1 00:06:15.510 --rc geninfo_unexecuted_blocks=1 00:06:15.510 00:06:15.510 ' 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.510 --rc genhtml_branch_coverage=1 00:06:15.510 --rc genhtml_function_coverage=1 00:06:15.510 --rc genhtml_legend=1 00:06:15.510 --rc geninfo_all_blocks=1 00:06:15.510 --rc geninfo_unexecuted_blocks=1 00:06:15.510 00:06:15.510 ' 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.510 --rc genhtml_branch_coverage=1 00:06:15.510 --rc genhtml_function_coverage=1 00:06:15.510 --rc genhtml_legend=1 00:06:15.510 --rc geninfo_all_blocks=1 00:06:15.510 --rc geninfo_unexecuted_blocks=1 00:06:15.510 00:06:15.510 ' 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.510 11:00:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.510 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.511 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.770 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.342 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:22.343 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:22.343 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:22.343 
11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:22.343 Found net devices under 0000:86:00.0: cvl_0_0 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:22.343 11:00:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:22.343 Found net devices under 0000:86:00.1: cvl_0_1 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:22.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:22.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms
00:06:22.343
00:06:22.343 --- 10.0.0.2 ping statistics ---
00:06:22.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:22.343 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:22.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:22.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms
00:06:22.343
00:06:22.343 --- 10.0.0.1 ping statistics ---
00:06:22.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:22.343 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:22.343 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:22.343 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:06:22.343 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:22.343 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.343 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:22.343 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3902649 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3902649 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3902649 ']' 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:22.344 [2024-11-20 11:00:49.065450] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:06:22.344 [2024-11-20 11:00:49.065492] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.344 [2024-11-20 11:00:49.127381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.344 [2024-11-20 11:00:49.167194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.344 [2024-11-20 11:00:49.167232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.344 [2024-11-20 11:00:49.167239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.344 [2024-11-20 11:00:49.167245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.344 [2024-11-20 11:00:49.167251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:22.344 [2024-11-20 11:00:49.168672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.344 [2024-11-20 11:00:49.168783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.344 [2024-11-20 11:00:49.168784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:22.344 [2024-11-20 11:00:49.485262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:22.344 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:22.603 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:22.603 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:22.862 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:23.120 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e03666fa-853d-406c-be61-2c186d74934b 00:06:23.121 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e03666fa-853d-406c-be61-2c186d74934b lvol 20 00:06:23.379 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e2bcf840-34e5-466a-adb7-445c1bb08637 00:06:23.379 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:23.379 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e2bcf840-34e5-466a-adb7-445c1bb08637 00:06:23.638 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:23.896 [2024-11-20 11:00:51.221243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.896 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:24.155 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3903142 00:06:24.155 11:00:51 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:24.155 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:25.091 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e2bcf840-34e5-466a-adb7-445c1bb08637 MY_SNAPSHOT 00:06:25.348 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=75b8879c-c256-4c66-8043-3673086a2039 00:06:25.348 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e2bcf840-34e5-466a-adb7-445c1bb08637 30 00:06:25.606 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 75b8879c-c256-4c66-8043-3673086a2039 MY_CLONE 00:06:25.863 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d647d561-aa07-4c91-83f8-0768f6a810ce 00:06:25.863 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d647d561-aa07-4c91-83f8-0768f6a810ce 00:06:26.429 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3903142 00:06:34.539 Initializing NVMe Controllers 00:06:34.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:34.539 Controller IO queue size 128, less than required. 00:06:34.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
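[Editor's note: the xtrace above drives the target entirely through rpc.py — TCP transport, two malloc bdevs, a raid0 over them, an lvstore and lvol, then a subsystem with a TCP listener. A dry-run sketch of that sequence is below; the rpc.py method names, arguments, and the lvstore UUID are taken from this log, while the `RPC` wrapper and `bringup` function are illustrative additions that only echo the calls by default.]

```shell
# Dry-run sketch (not part of the harness): replays the rpc.py bring-up
# sequence recorded above. RPC defaults to printing each call; point it
# at spdk/scripts/rpc.py against a live nvmf_tgt to issue them for real.
RPC="${RPC:-echo rpc.py}"

bringup() {
  $RPC nvmf_create_transport -t tcp -o -u 8192                    # transport, options as logged
  $RPC bdev_malloc_create 64 512                                  # Malloc0: 64 MiB, 512 B blocks
  $RPC bdev_malloc_create 64 512                                  # Malloc1
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"  # raid0 over both bdevs
  $RPC bdev_lvol_create_lvstore raid0 lvs                         # prints the new lvstore UUID
  $RPC bdev_lvol_create -u e03666fa-853d-406c-be61-2c186d74934b lvol 20  # UUID from this run
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
}

bringup
```

The spdk_nvme_perf run that follows then connects to that listener (`traddr:10.0.0.2 trsvcid:4420`) as an initiator.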
00:06:34.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:06:34.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:06:34.540 Initialization complete. Launching workers.
00:06:34.540 ========================================================
00:06:34.540 Latency(us)
00:06:34.540 Device Information : IOPS MiB/s Average min max
00:06:34.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12156.60 47.49 10535.69 368.71 76742.31
00:06:34.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12051.20 47.07 10624.82 3409.25 43991.21
00:06:34.540 ========================================================
00:06:34.540 Total : 24207.80 94.56 10580.06 368.71 76742.31
00:06:34.540
00:06:34.540 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:06:34.540 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e2bcf840-34e5-466a-adb7-445c1bb08637
00:06:34.799 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e03666fa-853d-406c-be61-2c186d74934b
00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.059 rmmod nvme_tcp 00:06:35.059 rmmod nvme_fabrics 00:06:35.059 rmmod nvme_keyring 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3902649 ']' 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3902649 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3902649 ']' 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3902649 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3902649 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3902649' 00:06:35.059 killing process with pid 3902649 00:06:35.059 11:01:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3902649 00:06:35.059 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3902649 00:06:35.318 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:35.318 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:35.318 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:35.318 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:35.318 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:35.318 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:35.318 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:35.319 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.319 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.319 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.319 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.319 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:37.855 00:06:37.855 real 0m22.013s 00:06:37.855 user 1m3.380s 00:06:37.855 sys 0m7.667s 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:37.855 ************************************ 00:06:37.855 END TEST 
nvmf_lvol 00:06:37.855 ************************************ 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.855 ************************************ 00:06:37.855 START TEST nvmf_lvs_grow 00:06:37.855 ************************************ 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:37.855 * Looking for test storage... 00:06:37.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.855 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.855 11:01:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.855 --rc genhtml_branch_coverage=1 00:06:37.855 --rc genhtml_function_coverage=1 00:06:37.855 --rc genhtml_legend=1 00:06:37.855 --rc geninfo_all_blocks=1 00:06:37.855 --rc geninfo_unexecuted_blocks=1 00:06:37.855 00:06:37.855 ' 
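[Editor's note: the scripts/common.sh trace above (`lt 1.15 2` via `cmp_versions`) is the harness checking the installed lcov version: both strings are split on `.`/`-` and compared field by field. A minimal numeric-only sketch of that logic follows; the function name `version_lt` is illustrative, and the real helper also validates non-numeric fields.]

```shell
# Minimal sketch of the dotted-version "less than" test traced above.
# Assumes purely numeric fields; a missing field compares as 0.
version_lt() {
  local -a v1 v2
  IFS='.-' read -ra v1 <<< "$1"
  IFS='.-' read -ra v2 <<< "$2"
  local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0    # strictly smaller field: first version is older
    (( a > b )) && return 1    # strictly larger field: first version is newer
  done
  return 1                     # all fields equal: not less-than
}
```

With these semantics `version_lt 1.15 2` succeeds, matching the `lt 1.15 2` path taken in the log.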
00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.855 --rc genhtml_branch_coverage=1 00:06:37.855 --rc genhtml_function_coverage=1 00:06:37.855 --rc genhtml_legend=1 00:06:37.855 --rc geninfo_all_blocks=1 00:06:37.855 --rc geninfo_unexecuted_blocks=1 00:06:37.855 00:06:37.855 ' 00:06:37.855 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.855 --rc genhtml_branch_coverage=1 00:06:37.855 --rc genhtml_function_coverage=1 00:06:37.855 --rc genhtml_legend=1 00:06:37.855 --rc geninfo_all_blocks=1 00:06:37.855 --rc geninfo_unexecuted_blocks=1 00:06:37.855 00:06:37.856 ' 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.856 --rc genhtml_branch_coverage=1 00:06:37.856 --rc genhtml_function_coverage=1 00:06:37.856 --rc genhtml_legend=1 00:06:37.856 --rc geninfo_all_blocks=1 00:06:37.856 --rc geninfo_unexecuted_blocks=1 00:06:37.856 00:06:37.856 ' 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.856 11:01:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.856 
11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.856 11:01:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.856 
11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:37.856 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:44.431 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.431 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:44.431 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.432 
11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:44.432 Found net devices under 0000:86:00.0: cvl_0_0 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:44.432 Found net devices under 0000:86:00.1: cvl_0_1 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:44.432 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:44.432 11:01:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:44.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:06:44.432 00:06:44.432 --- 10.0.0.2 ping statistics --- 00:06:44.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.432 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:06:44.432 00:06:44.432 --- 10.0.0.1 ping statistics --- 00:06:44.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.432 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3908524 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3908524 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3908524 ']' 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:44.432 [2024-11-20 11:01:11.170977] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:06:44.432 [2024-11-20 11:01:11.171027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.432 [2024-11-20 11:01:11.250133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.432 [2024-11-20 11:01:11.291650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.432 [2024-11-20 11:01:11.291686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.432 [2024-11-20 11:01:11.291693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.432 [2024-11-20 11:01:11.291700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.432 [2024-11-20 11:01:11.291705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
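For readers following the trace: the `nvmf_tcp_init` sequence recorded above (`nvmf/common.sh@250`–`@291`) moves one port of the dual-port Intel E810 NIC into a private network namespace so that the NVMe/TCP target and initiator can exercise real hardware on a single machine. A minimal sketch of that setup follows, using the interface names and addresses taken from this log (`cvl_0_0`/`cvl_0_1`, `10.0.0.2`/`10.0.0.1`); it must run as root on a host with those two connected ports, and it is a simplification of the full `common.sh` logic, not a drop-in replacement:

```shell
#!/usr/bin/env bash
# Sketch of the netns-based NVMe/TCP test topology seen in this log.
# Assumptions: run as root, two physically connected ports named
# cvl_0_0 and cvl_0_1 (names and IPs copied from the trace above).
set -euo pipefail

NS=cvl_0_0_ns_spdk

# Start clean: drop any stale IPv4 addresses on both ports.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target side lives in its own namespace on cvl_0_0 / 10.0.0.2.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Initiator side stays in the root namespace on cvl_0_1 / 10.0.0.1.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# Allow NVMe/TCP (port 4420) in, then verify reachability both ways,
# mirroring the two ping checks in the trace.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this topology in place, `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk` while bdevperf connects from the root namespace, which is why the target app command in the log is prefixed with the netns exec wrapper.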
00:06:44.432 [2024-11-20 11:01:11.292263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:44.432 [2024-11-20 11:01:11.592479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.432 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:44.433 ************************************ 00:06:44.433 START TEST lvs_grow_clean 00:06:44.433 ************************************ 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:44.433 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:44.692 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:44.692 11:01:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:44.692 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:44.952 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:44.952 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:44.952 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 lvol 150 00:06:45.211 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5a1530a6-b4be-483e-803d-3a11738979d0 00:06:45.211 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:45.211 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:45.211 [2024-11-20 11:01:12.638532] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:45.211 [2024-11-20 11:01:12.638581] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:45.211 true 00:06:45.211 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:45.211 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:45.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:45.471 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:45.730 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5a1530a6-b4be-483e-803d-3a11738979d0 00:06:45.990 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:45.990 [2024-11-20 11:01:13.404811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.990 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:46.249 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3909020 00:06:46.249 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:46.249 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
3909020 /var/tmp/bdevperf.sock 00:06:46.249 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3909020 ']' 00:06:46.249 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:46.249 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:46.249 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.249 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:46.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:46.249 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.249 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:46.249 [2024-11-20 11:01:13.657676] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:06:46.249 [2024-11-20 11:01:13.657722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909020 ] 00:06:46.249 [2024-11-20 11:01:13.733864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.508 [2024-11-20 11:01:13.778491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.508 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.508 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:46.509 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:46.767 Nvme0n1 00:06:46.767 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:47.026 [ 00:06:47.026 { 00:06:47.026 "name": "Nvme0n1", 00:06:47.026 "aliases": [ 00:06:47.026 "5a1530a6-b4be-483e-803d-3a11738979d0" 00:06:47.026 ], 00:06:47.026 "product_name": "NVMe disk", 00:06:47.026 "block_size": 4096, 00:06:47.026 "num_blocks": 38912, 00:06:47.026 "uuid": "5a1530a6-b4be-483e-803d-3a11738979d0", 00:06:47.026 "numa_id": 1, 00:06:47.026 "assigned_rate_limits": { 00:06:47.026 "rw_ios_per_sec": 0, 00:06:47.026 "rw_mbytes_per_sec": 0, 00:06:47.026 "r_mbytes_per_sec": 0, 00:06:47.026 "w_mbytes_per_sec": 0 00:06:47.026 }, 00:06:47.026 "claimed": false, 00:06:47.026 "zoned": false, 00:06:47.026 "supported_io_types": { 00:06:47.026 "read": true, 
00:06:47.026 "write": true, 00:06:47.026 "unmap": true, 00:06:47.026 "flush": true, 00:06:47.026 "reset": true, 00:06:47.026 "nvme_admin": true, 00:06:47.026 "nvme_io": true, 00:06:47.026 "nvme_io_md": false, 00:06:47.026 "write_zeroes": true, 00:06:47.026 "zcopy": false, 00:06:47.026 "get_zone_info": false, 00:06:47.026 "zone_management": false, 00:06:47.026 "zone_append": false, 00:06:47.026 "compare": true, 00:06:47.026 "compare_and_write": true, 00:06:47.026 "abort": true, 00:06:47.026 "seek_hole": false, 00:06:47.026 "seek_data": false, 00:06:47.026 "copy": true, 00:06:47.026 "nvme_iov_md": false 00:06:47.026 }, 00:06:47.026 "memory_domains": [ 00:06:47.026 { 00:06:47.026 "dma_device_id": "system", 00:06:47.026 "dma_device_type": 1 00:06:47.026 } 00:06:47.026 ], 00:06:47.026 "driver_specific": { 00:06:47.027 "nvme": [ 00:06:47.027 { 00:06:47.027 "trid": { 00:06:47.027 "trtype": "TCP", 00:06:47.027 "adrfam": "IPv4", 00:06:47.027 "traddr": "10.0.0.2", 00:06:47.027 "trsvcid": "4420", 00:06:47.027 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:47.027 }, 00:06:47.027 "ctrlr_data": { 00:06:47.027 "cntlid": 1, 00:06:47.027 "vendor_id": "0x8086", 00:06:47.027 "model_number": "SPDK bdev Controller", 00:06:47.027 "serial_number": "SPDK0", 00:06:47.027 "firmware_revision": "25.01", 00:06:47.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:47.027 "oacs": { 00:06:47.027 "security": 0, 00:06:47.027 "format": 0, 00:06:47.027 "firmware": 0, 00:06:47.027 "ns_manage": 0 00:06:47.027 }, 00:06:47.027 "multi_ctrlr": true, 00:06:47.027 "ana_reporting": false 00:06:47.027 }, 00:06:47.027 "vs": { 00:06:47.027 "nvme_version": "1.3" 00:06:47.027 }, 00:06:47.027 "ns_data": { 00:06:47.027 "id": 1, 00:06:47.027 "can_share": true 00:06:47.027 } 00:06:47.027 } 00:06:47.027 ], 00:06:47.027 "mp_policy": "active_passive" 00:06:47.027 } 00:06:47.027 } 00:06:47.027 ] 00:06:47.027 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3909070 00:06:47.027 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:47.027 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:47.027 Running I/O for 10 seconds... 00:06:48.405 Latency(us) 00:06:48.405 [2024-11-20T10:01:15.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:48.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.405 Nvme0n1 : 1.00 22551.00 88.09 0.00 0.00 0.00 0.00 0.00 00:06:48.405 [2024-11-20T10:01:15.901Z] =================================================================================================================== 00:06:48.405 [2024-11-20T10:01:15.901Z] Total : 22551.00 88.09 0.00 0.00 0.00 0.00 0.00 00:06:48.405 00:06:48.973 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:49.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.231 Nvme0n1 : 2.00 22685.50 88.62 0.00 0.00 0.00 0.00 0.00 00:06:49.231 [2024-11-20T10:01:16.727Z] =================================================================================================================== 00:06:49.231 [2024-11-20T10:01:16.727Z] Total : 22685.50 88.62 0.00 0.00 0.00 0.00 0.00 00:06:49.231 00:06:49.231 true 00:06:49.231 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:49.231 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:49.490 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:49.490 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:49.490 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3909070 00:06:50.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.058 Nvme0n1 : 3.00 22712.67 88.72 0.00 0.00 0.00 0.00 0.00 00:06:50.058 [2024-11-20T10:01:17.554Z] =================================================================================================================== 00:06:50.058 [2024-11-20T10:01:17.554Z] Total : 22712.67 88.72 0.00 0.00 0.00 0.00 0.00 00:06:50.058 00:06:50.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.997 Nvme0n1 : 4.00 22778.25 88.98 0.00 0.00 0.00 0.00 0.00 00:06:50.997 [2024-11-20T10:01:18.493Z] =================================================================================================================== 00:06:50.997 [2024-11-20T10:01:18.493Z] Total : 22778.25 88.98 0.00 0.00 0.00 0.00 0.00 00:06:50.997 00:06:52.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.376 Nvme0n1 : 5.00 22820.80 89.14 0.00 0.00 0.00 0.00 0.00 00:06:52.376 [2024-11-20T10:01:19.872Z] =================================================================================================================== 00:06:52.376 [2024-11-20T10:01:19.872Z] Total : 22820.80 89.14 0.00 0.00 0.00 0.00 0.00 00:06:52.376 00:06:53.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.313 Nvme0n1 : 6.00 22799.50 89.06 0.00 0.00 0.00 0.00 0.00 00:06:53.313 [2024-11-20T10:01:20.809Z] =================================================================================================================== 00:06:53.313 
[2024-11-20T10:01:20.809Z] Total : 22799.50 89.06 0.00 0.00 0.00 0.00 0.00 00:06:53.313 00:06:54.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.250 Nvme0n1 : 7.00 22808.14 89.09 0.00 0.00 0.00 0.00 0.00 00:06:54.250 [2024-11-20T10:01:21.746Z] =================================================================================================================== 00:06:54.250 [2024-11-20T10:01:21.746Z] Total : 22808.14 89.09 0.00 0.00 0.00 0.00 0.00 00:06:54.250 00:06:55.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.187 Nvme0n1 : 8.00 22847.38 89.25 0.00 0.00 0.00 0.00 0.00 00:06:55.187 [2024-11-20T10:01:22.683Z] =================================================================================================================== 00:06:55.187 [2024-11-20T10:01:22.683Z] Total : 22847.38 89.25 0.00 0.00 0.00 0.00 0.00 00:06:55.187 00:06:56.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.125 Nvme0n1 : 9.00 22879.00 89.37 0.00 0.00 0.00 0.00 0.00 00:06:56.125 [2024-11-20T10:01:23.621Z] =================================================================================================================== 00:06:56.125 [2024-11-20T10:01:23.621Z] Total : 22879.00 89.37 0.00 0.00 0.00 0.00 0.00 00:06:56.125 00:06:57.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.064 Nvme0n1 : 10.00 22896.70 89.44 0.00 0.00 0.00 0.00 0.00 00:06:57.064 [2024-11-20T10:01:24.560Z] =================================================================================================================== 00:06:57.064 [2024-11-20T10:01:24.560Z] Total : 22896.70 89.44 0.00 0.00 0.00 0.00 0.00 00:06:57.064 00:06:57.064 00:06:57.064 Latency(us) 00:06:57.064 [2024-11-20T10:01:24.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:57.064 Nvme0n1 : 10.00 22901.54 89.46 0.00 0.00 5586.13 3205.57 10941.66 00:06:57.064 [2024-11-20T10:01:24.560Z] =================================================================================================================== 00:06:57.064 [2024-11-20T10:01:24.560Z] Total : 22901.54 89.46 0.00 0.00 5586.13 3205.57 10941.66 00:06:57.064 { 00:06:57.064 "results": [ 00:06:57.064 { 00:06:57.064 "job": "Nvme0n1", 00:06:57.064 "core_mask": "0x2", 00:06:57.064 "workload": "randwrite", 00:06:57.064 "status": "finished", 00:06:57.064 "queue_depth": 128, 00:06:57.064 "io_size": 4096, 00:06:57.064 "runtime": 10.003474, 00:06:57.064 "iops": 22901.544003613144, 00:06:57.064 "mibps": 89.45915626411384, 00:06:57.064 "io_failed": 0, 00:06:57.064 "io_timeout": 0, 00:06:57.064 "avg_latency_us": 5586.130347945649, 00:06:57.064 "min_latency_us": 3205.5652173913045, 00:06:57.064 "max_latency_us": 10941.662608695653 00:06:57.064 } 00:06:57.064 ], 00:06:57.064 "core_count": 1 00:06:57.064 } 00:06:57.064 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3909020 00:06:57.064 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3909020 ']' 00:06:57.064 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3909020 00:06:57.064 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:57.064 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.064 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3909020 00:06:57.323 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:57.323 11:01:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:57.323 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3909020' 00:06:57.323 killing process with pid 3909020 00:06:57.323 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3909020 00:06:57.323 Received shutdown signal, test time was about 10.000000 seconds 00:06:57.323 00:06:57.323 Latency(us) 00:06:57.323 [2024-11-20T10:01:24.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.323 [2024-11-20T10:01:24.819Z] =================================================================================================================== 00:06:57.323 [2024-11-20T10:01:24.819Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:57.323 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3909020 00:06:57.323 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:57.582 11:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:57.842 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:57.842 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:58.125 [2024-11-20 11:01:25.517571] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.125 
11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:58.125 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:58.423 request: 00:06:58.423 { 00:06:58.423 "uuid": "c13059fc-9f58-4093-95c6-2c3f532bb3f7", 00:06:58.423 "method": "bdev_lvol_get_lvstores", 00:06:58.423 "req_id": 1 00:06:58.423 } 00:06:58.423 Got JSON-RPC error response 00:06:58.423 response: 00:06:58.423 { 00:06:58.423 "code": -19, 00:06:58.423 "message": "No such device" 00:06:58.423 } 00:06:58.423 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:58.423 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.423 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:58.423 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.423 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:58.699 aio_bdev 00:06:58.699 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5a1530a6-b4be-483e-803d-3a11738979d0 00:06:58.699 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=5a1530a6-b4be-483e-803d-3a11738979d0 00:06:58.699 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:58.699 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:58.699 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:58.699 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:58.699 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:58.699 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5a1530a6-b4be-483e-803d-3a11738979d0 -t 2000 00:06:58.988 [ 00:06:58.988 { 00:06:58.988 "name": "5a1530a6-b4be-483e-803d-3a11738979d0", 00:06:58.988 "aliases": [ 00:06:58.988 "lvs/lvol" 00:06:58.988 ], 00:06:58.988 "product_name": "Logical Volume", 00:06:58.988 "block_size": 4096, 00:06:58.988 "num_blocks": 38912, 00:06:58.988 "uuid": "5a1530a6-b4be-483e-803d-3a11738979d0", 00:06:58.988 "assigned_rate_limits": { 00:06:58.988 "rw_ios_per_sec": 0, 00:06:58.988 "rw_mbytes_per_sec": 0, 00:06:58.988 "r_mbytes_per_sec": 0, 00:06:58.988 "w_mbytes_per_sec": 0 00:06:58.988 }, 00:06:58.988 "claimed": false, 00:06:58.988 "zoned": false, 00:06:58.988 "supported_io_types": { 00:06:58.988 "read": true, 00:06:58.988 "write": true, 00:06:58.988 "unmap": true, 00:06:58.988 "flush": false, 00:06:58.988 "reset": true, 00:06:58.988 
"nvme_admin": false, 00:06:58.988 "nvme_io": false, 00:06:58.988 "nvme_io_md": false, 00:06:58.988 "write_zeroes": true, 00:06:58.988 "zcopy": false, 00:06:58.988 "get_zone_info": false, 00:06:58.988 "zone_management": false, 00:06:58.988 "zone_append": false, 00:06:58.988 "compare": false, 00:06:58.988 "compare_and_write": false, 00:06:58.988 "abort": false, 00:06:58.988 "seek_hole": true, 00:06:58.988 "seek_data": true, 00:06:58.988 "copy": false, 00:06:58.988 "nvme_iov_md": false 00:06:58.988 }, 00:06:58.988 "driver_specific": { 00:06:58.988 "lvol": { 00:06:58.988 "lvol_store_uuid": "c13059fc-9f58-4093-95c6-2c3f532bb3f7", 00:06:58.988 "base_bdev": "aio_bdev", 00:06:58.988 "thin_provision": false, 00:06:58.988 "num_allocated_clusters": 38, 00:06:58.989 "snapshot": false, 00:06:58.989 "clone": false, 00:06:58.989 "esnap_clone": false 00:06:58.989 } 00:06:58.989 } 00:06:58.989 } 00:06:58.989 ] 00:06:58.989 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:58.989 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:58.989 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:59.302 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:59.302 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:59.302 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:59.302 11:01:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:59.302 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5a1530a6-b4be-483e-803d-3a11738979d0 00:06:59.586 11:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c13059fc-9f58-4093-95c6-2c3f532bb3f7 00:06:59.845 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:59.845 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:59.845 00:06:59.845 real 0m15.675s 00:06:59.845 user 0m15.191s 00:06:59.845 sys 0m1.554s 00:06:59.845 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.845 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:59.845 ************************************ 00:06:59.845 END TEST lvs_grow_clean 00:06:59.845 ************************************ 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:00.104 ************************************ 
00:07:00.104 START TEST lvs_grow_dirty 00:07:00.104 ************************************ 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:00.104 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:00.363 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:00.363 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:00.363 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:00.363 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:00.363 11:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:00.622 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:00.622 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:00.622 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa lvol 150 00:07:00.880 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f88e4434-3d22-4863-9d98-30e97a5ab956 00:07:00.880 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:00.880 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:01.140 [2024-11-20 11:01:28.382830] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:01.140 [2024-11-20 11:01:28.382880] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:01.140 true 00:07:01.140 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:01.140 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:01.140 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:01.140 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:01.399 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f88e4434-3d22-4863-9d98-30e97a5ab956 00:07:01.658 11:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:01.658 [2024-11-20 11:01:29.137056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.916 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.916 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3911633 00:07:01.916 11:01:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:01.916 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:01.916 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3911633 /var/tmp/bdevperf.sock 00:07:01.916 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3911633 ']' 00:07:01.916 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:01.916 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.916 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:01.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:01.916 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.916 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:01.916 [2024-11-20 11:01:29.375376] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:07:01.916 [2024-11-20 11:01:29.375424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911633 ] 00:07:02.174 [2024-11-20 11:01:29.449577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.174 [2024-11-20 11:01:29.490129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.174 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.174 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:02.174 11:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:02.738 Nvme0n1 00:07:02.738 11:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:02.738 [ 00:07:02.738 { 00:07:02.738 "name": "Nvme0n1", 00:07:02.738 "aliases": [ 00:07:02.738 "f88e4434-3d22-4863-9d98-30e97a5ab956" 00:07:02.738 ], 00:07:02.738 "product_name": "NVMe disk", 00:07:02.738 "block_size": 4096, 00:07:02.738 "num_blocks": 38912, 00:07:02.738 "uuid": "f88e4434-3d22-4863-9d98-30e97a5ab956", 00:07:02.738 "numa_id": 1, 00:07:02.738 "assigned_rate_limits": { 00:07:02.738 "rw_ios_per_sec": 0, 00:07:02.738 "rw_mbytes_per_sec": 0, 00:07:02.738 "r_mbytes_per_sec": 0, 00:07:02.738 "w_mbytes_per_sec": 0 00:07:02.738 }, 00:07:02.738 "claimed": false, 00:07:02.738 "zoned": false, 00:07:02.738 "supported_io_types": { 00:07:02.738 "read": true, 
00:07:02.738 "write": true, 00:07:02.738 "unmap": true, 00:07:02.738 "flush": true, 00:07:02.738 "reset": true, 00:07:02.738 "nvme_admin": true, 00:07:02.738 "nvme_io": true, 00:07:02.738 "nvme_io_md": false, 00:07:02.738 "write_zeroes": true, 00:07:02.738 "zcopy": false, 00:07:02.738 "get_zone_info": false, 00:07:02.738 "zone_management": false, 00:07:02.738 "zone_append": false, 00:07:02.738 "compare": true, 00:07:02.738 "compare_and_write": true, 00:07:02.738 "abort": true, 00:07:02.738 "seek_hole": false, 00:07:02.738 "seek_data": false, 00:07:02.738 "copy": true, 00:07:02.738 "nvme_iov_md": false 00:07:02.738 }, 00:07:02.738 "memory_domains": [ 00:07:02.738 { 00:07:02.738 "dma_device_id": "system", 00:07:02.738 "dma_device_type": 1 00:07:02.738 } 00:07:02.738 ], 00:07:02.738 "driver_specific": { 00:07:02.738 "nvme": [ 00:07:02.738 { 00:07:02.738 "trid": { 00:07:02.738 "trtype": "TCP", 00:07:02.738 "adrfam": "IPv4", 00:07:02.738 "traddr": "10.0.0.2", 00:07:02.738 "trsvcid": "4420", 00:07:02.738 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:02.738 }, 00:07:02.738 "ctrlr_data": { 00:07:02.738 "cntlid": 1, 00:07:02.738 "vendor_id": "0x8086", 00:07:02.738 "model_number": "SPDK bdev Controller", 00:07:02.738 "serial_number": "SPDK0", 00:07:02.738 "firmware_revision": "25.01", 00:07:02.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:02.738 "oacs": { 00:07:02.738 "security": 0, 00:07:02.738 "format": 0, 00:07:02.738 "firmware": 0, 00:07:02.738 "ns_manage": 0 00:07:02.738 }, 00:07:02.738 "multi_ctrlr": true, 00:07:02.738 "ana_reporting": false 00:07:02.738 }, 00:07:02.738 "vs": { 00:07:02.738 "nvme_version": "1.3" 00:07:02.738 }, 00:07:02.738 "ns_data": { 00:07:02.738 "id": 1, 00:07:02.738 "can_share": true 00:07:02.738 } 00:07:02.738 } 00:07:02.738 ], 00:07:02.738 "mp_policy": "active_passive" 00:07:02.738 } 00:07:02.738 } 00:07:02.738 ] 00:07:02.738 11:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3911865 00:07:02.738 11:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:02.738 11:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:02.996 Running I/O for 10 seconds... 00:07:03.931 Latency(us) 00:07:03.931 [2024-11-20T10:01:31.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.931 Nvme0n1 : 1.00 22357.00 87.33 0.00 0.00 0.00 0.00 0.00 00:07:03.931 [2024-11-20T10:01:31.427Z] =================================================================================================================== 00:07:03.931 [2024-11-20T10:01:31.427Z] Total : 22357.00 87.33 0.00 0.00 0.00 0.00 0.00 00:07:03.931 00:07:04.866 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:04.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.866 Nvme0n1 : 2.00 22611.50 88.33 0.00 0.00 0.00 0.00 0.00 00:07:04.866 [2024-11-20T10:01:32.362Z] =================================================================================================================== 00:07:04.866 [2024-11-20T10:01:32.362Z] Total : 22611.50 88.33 0.00 0.00 0.00 0.00 0.00 00:07:04.866 00:07:05.125 true 00:07:05.125 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:05.125 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:05.125 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:05.125 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:05.125 11:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3911865 00:07:06.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.062 Nvme0n1 : 3.00 22702.33 88.68 0.00 0.00 0.00 0.00 0.00 00:07:06.062 [2024-11-20T10:01:33.558Z] =================================================================================================================== 00:07:06.062 [2024-11-20T10:01:33.558Z] Total : 22702.33 88.68 0.00 0.00 0.00 0.00 0.00 00:07:06.062 00:07:07.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.000 Nvme0n1 : 4.00 22785.25 89.00 0.00 0.00 0.00 0.00 0.00 00:07:07.000 [2024-11-20T10:01:34.496Z] =================================================================================================================== 00:07:07.000 [2024-11-20T10:01:34.496Z] Total : 22785.25 89.00 0.00 0.00 0.00 0.00 0.00 00:07:07.000 00:07:07.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.936 Nvme0n1 : 5.00 22846.00 89.24 0.00 0.00 0.00 0.00 0.00 00:07:07.936 [2024-11-20T10:01:35.432Z] =================================================================================================================== 00:07:07.936 [2024-11-20T10:01:35.432Z] Total : 22846.00 89.24 0.00 0.00 0.00 0.00 0.00 00:07:07.936 00:07:08.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.872 Nvme0n1 : 6.00 22903.00 89.46 0.00 0.00 0.00 0.00 0.00 00:07:08.872 [2024-11-20T10:01:36.368Z] =================================================================================================================== 00:07:08.872 
[2024-11-20T10:01:36.368Z] Total : 22903.00 89.46 0.00 0.00 0.00 0.00 0.00 00:07:08.872 00:07:10.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.249 Nvme0n1 : 7.00 22935.00 89.59 0.00 0.00 0.00 0.00 0.00 00:07:10.249 [2024-11-20T10:01:37.745Z] =================================================================================================================== 00:07:10.249 [2024-11-20T10:01:37.745Z] Total : 22935.00 89.59 0.00 0.00 0.00 0.00 0.00 00:07:10.249 00:07:11.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.186 Nvme0n1 : 8.00 22960.00 89.69 0.00 0.00 0.00 0.00 0.00 00:07:11.186 [2024-11-20T10:01:38.682Z] =================================================================================================================== 00:07:11.186 [2024-11-20T10:01:38.682Z] Total : 22960.00 89.69 0.00 0.00 0.00 0.00 0.00 00:07:11.186 00:07:12.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.123 Nvme0n1 : 9.00 22977.11 89.75 0.00 0.00 0.00 0.00 0.00 00:07:12.123 [2024-11-20T10:01:39.620Z] =================================================================================================================== 00:07:12.124 [2024-11-20T10:01:39.620Z] Total : 22977.11 89.75 0.00 0.00 0.00 0.00 0.00 00:07:12.124 00:07:13.060 00:07:13.061 Latency(us) 00:07:13.061 [2024-11-20T10:01:40.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.061 Nvme0n1 : 10.00 22980.57 89.77 0.00 0.00 5567.13 1823.61 11226.60 00:07:13.061 [2024-11-20T10:01:40.557Z] =================================================================================================================== 00:07:13.061 [2024-11-20T10:01:40.557Z] Total : 22980.57 89.77 0.00 0.00 5567.13 1823.61 11226.60 00:07:13.061 { 00:07:13.061 "results": [ 00:07:13.061 { 00:07:13.061 "job": "Nvme0n1", 
00:07:13.061 "core_mask": "0x2", 00:07:13.061 "workload": "randwrite", 00:07:13.061 "status": "finished", 00:07:13.061 "queue_depth": 128, 00:07:13.061 "io_size": 4096, 00:07:13.061 "runtime": 10.001232, 00:07:13.061 "iops": 22980.568793924587, 00:07:13.061 "mibps": 89.76784685126792, 00:07:13.061 "io_failed": 0, 00:07:13.061 "io_timeout": 0, 00:07:13.061 "avg_latency_us": 5567.126173166191, 00:07:13.061 "min_latency_us": 1823.6104347826088, 00:07:13.061 "max_latency_us": 11226.601739130434 00:07:13.061 } 00:07:13.061 ], 00:07:13.061 "core_count": 1 00:07:13.061 } 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3911633 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3911633 ']' 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3911633 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3911633 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3911633' 00:07:13.061 killing process with pid 3911633 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3911633 00:07:13.061 
Received shutdown signal, test time was about 10.000000 seconds 00:07:13.061 00:07:13.061 Latency(us) 00:07:13.061 [2024-11-20T10:01:40.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.061 [2024-11-20T10:01:40.557Z] =================================================================================================================== 00:07:13.061 [2024-11-20T10:01:40.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:13.061 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3911633 00:07:13.320 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.320 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:13.578 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:13.578 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3908524 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3908524 00:07:13.837 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3908524 Killed "${NVMF_APP[@]}" "$@" 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3913713 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3913713 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3913713 ']' 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.837 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:13.837 [2024-11-20 11:01:41.276839] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:07:13.837 [2024-11-20 11:01:41.276887] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.095 [2024-11-20 11:01:41.354540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.096 [2024-11-20 11:01:41.395460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.096 [2024-11-20 11:01:41.395497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.096 [2024-11-20 11:01:41.395504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.096 [2024-11-20 11:01:41.395510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.096 [2024-11-20 11:01:41.395515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:14.096 [2024-11-20 11:01:41.396105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.096 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.096 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:14.096 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:14.096 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.096 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:14.096 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.096 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:14.354 [2024-11-20 11:01:41.692688] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:14.354 [2024-11-20 11:01:41.692771] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:14.354 [2024-11-20 11:01:41.692798] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:14.354 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:14.354 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f88e4434-3d22-4863-9d98-30e97a5ab956 00:07:14.355 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f88e4434-3d22-4863-9d98-30e97a5ab956 
00:07:14.355 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:14.355 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:14.355 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:14.355 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:14.355 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:14.614 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f88e4434-3d22-4863-9d98-30e97a5ab956 -t 2000 00:07:14.614 [ 00:07:14.614 { 00:07:14.614 "name": "f88e4434-3d22-4863-9d98-30e97a5ab956", 00:07:14.614 "aliases": [ 00:07:14.614 "lvs/lvol" 00:07:14.614 ], 00:07:14.614 "product_name": "Logical Volume", 00:07:14.614 "block_size": 4096, 00:07:14.614 "num_blocks": 38912, 00:07:14.614 "uuid": "f88e4434-3d22-4863-9d98-30e97a5ab956", 00:07:14.614 "assigned_rate_limits": { 00:07:14.614 "rw_ios_per_sec": 0, 00:07:14.614 "rw_mbytes_per_sec": 0, 00:07:14.614 "r_mbytes_per_sec": 0, 00:07:14.614 "w_mbytes_per_sec": 0 00:07:14.614 }, 00:07:14.614 "claimed": false, 00:07:14.614 "zoned": false, 00:07:14.614 "supported_io_types": { 00:07:14.614 "read": true, 00:07:14.614 "write": true, 00:07:14.614 "unmap": true, 00:07:14.614 "flush": false, 00:07:14.614 "reset": true, 00:07:14.614 "nvme_admin": false, 00:07:14.614 "nvme_io": false, 00:07:14.614 "nvme_io_md": false, 00:07:14.614 "write_zeroes": true, 00:07:14.614 "zcopy": false, 00:07:14.614 "get_zone_info": false, 00:07:14.614 "zone_management": false, 00:07:14.614 "zone_append": 
false, 00:07:14.614 "compare": false, 00:07:14.614 "compare_and_write": false, 00:07:14.614 "abort": false, 00:07:14.614 "seek_hole": true, 00:07:14.614 "seek_data": true, 00:07:14.614 "copy": false, 00:07:14.614 "nvme_iov_md": false 00:07:14.614 }, 00:07:14.614 "driver_specific": { 00:07:14.614 "lvol": { 00:07:14.614 "lvol_store_uuid": "a59da146-b4aa-4b8f-a9f2-edc46af06dfa", 00:07:14.614 "base_bdev": "aio_bdev", 00:07:14.614 "thin_provision": false, 00:07:14.614 "num_allocated_clusters": 38, 00:07:14.614 "snapshot": false, 00:07:14.614 "clone": false, 00:07:14.614 "esnap_clone": false 00:07:14.614 } 00:07:14.614 } 00:07:14.614 } 00:07:14.614 ] 00:07:14.614 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:14.873 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:14.873 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:14.873 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:14.873 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:14.873 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:15.131 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:15.131 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:15.390 [2024-11-20 11:01:42.665543] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:15.390 11:01:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:15.390 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:15.390 request: 00:07:15.390 { 00:07:15.390 "uuid": "a59da146-b4aa-4b8f-a9f2-edc46af06dfa", 00:07:15.390 "method": "bdev_lvol_get_lvstores", 00:07:15.390 "req_id": 1 00:07:15.390 } 00:07:15.390 Got JSON-RPC error response 00:07:15.390 response: 00:07:15.390 { 00:07:15.390 "code": -19, 00:07:15.390 "message": "No such device" 00:07:15.390 } 00:07:15.648 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:15.648 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:15.648 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:15.648 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:15.648 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:15.648 aio_bdev 00:07:15.649 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f88e4434-3d22-4863-9d98-30e97a5ab956 00:07:15.649 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f88e4434-3d22-4863-9d98-30e97a5ab956 00:07:15.649 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.649 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:15.649 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.649 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.649 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:15.908 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f88e4434-3d22-4863-9d98-30e97a5ab956 -t 2000 00:07:16.167 [ 00:07:16.167 { 00:07:16.167 "name": "f88e4434-3d22-4863-9d98-30e97a5ab956", 00:07:16.167 "aliases": [ 00:07:16.167 "lvs/lvol" 00:07:16.167 ], 00:07:16.167 "product_name": "Logical Volume", 00:07:16.167 "block_size": 4096, 00:07:16.167 "num_blocks": 38912, 00:07:16.167 "uuid": "f88e4434-3d22-4863-9d98-30e97a5ab956", 00:07:16.167 "assigned_rate_limits": { 00:07:16.167 "rw_ios_per_sec": 0, 00:07:16.167 "rw_mbytes_per_sec": 0, 00:07:16.167 "r_mbytes_per_sec": 0, 00:07:16.167 "w_mbytes_per_sec": 0 00:07:16.167 }, 00:07:16.167 "claimed": false, 00:07:16.167 "zoned": false, 00:07:16.167 "supported_io_types": { 00:07:16.167 "read": true, 00:07:16.167 "write": true, 00:07:16.167 "unmap": true, 00:07:16.167 "flush": false, 00:07:16.167 "reset": true, 00:07:16.167 "nvme_admin": false, 00:07:16.167 "nvme_io": false, 00:07:16.167 "nvme_io_md": false, 00:07:16.167 "write_zeroes": true, 00:07:16.167 "zcopy": false, 00:07:16.167 "get_zone_info": false, 00:07:16.167 "zone_management": false, 00:07:16.167 "zone_append": false, 00:07:16.167 "compare": false, 00:07:16.167 "compare_and_write": false, 
00:07:16.167 "abort": false, 00:07:16.167 "seek_hole": true, 00:07:16.167 "seek_data": true, 00:07:16.167 "copy": false, 00:07:16.167 "nvme_iov_md": false 00:07:16.167 }, 00:07:16.167 "driver_specific": { 00:07:16.167 "lvol": { 00:07:16.167 "lvol_store_uuid": "a59da146-b4aa-4b8f-a9f2-edc46af06dfa", 00:07:16.167 "base_bdev": "aio_bdev", 00:07:16.167 "thin_provision": false, 00:07:16.167 "num_allocated_clusters": 38, 00:07:16.167 "snapshot": false, 00:07:16.167 "clone": false, 00:07:16.167 "esnap_clone": false 00:07:16.167 } 00:07:16.167 } 00:07:16.167 } 00:07:16.167 ] 00:07:16.167 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:16.167 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:16.167 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:16.167 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:16.426 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:16.426 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:16.426 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:16.426 11:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f88e4434-3d22-4863-9d98-30e97a5ab956 00:07:16.685 11:01:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a59da146-b4aa-4b8f-a9f2-edc46af06dfa 00:07:16.943 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:17.203 00:07:17.203 real 0m17.055s 00:07:17.203 user 0m44.082s 00:07:17.203 sys 0m3.762s 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:17.203 ************************************ 00:07:17.203 END TEST lvs_grow_dirty 00:07:17.203 ************************************ 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:17.203 nvmf_trace.0 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:17.203 rmmod nvme_tcp 00:07:17.203 rmmod nvme_fabrics 00:07:17.203 rmmod nvme_keyring 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3913713 ']' 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3913713 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3913713 ']' 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3913713 
00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3913713 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3913713' 00:07:17.203 killing process with pid 3913713 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3913713 00:07:17.203 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3913713 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.462 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.998 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:19.998 00:07:19.998 real 0m42.023s 00:07:19.998 user 1m4.953s 00:07:19.998 sys 0m10.265s 00:07:19.998 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.998 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:19.998 ************************************ 00:07:19.998 END TEST nvmf_lvs_grow 00:07:19.998 ************************************ 00:07:19.998 11:01:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:19.998 11:01:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.998 11:01:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.998 11:01:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:19.998 ************************************ 00:07:19.998 START TEST nvmf_bdev_io_wait 00:07:19.998 ************************************ 00:07:19.998 11:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:19.998 * Looking for test storage... 
00:07:19.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.998 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:19.999 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.999 --rc genhtml_branch_coverage=1 00:07:19.999 --rc genhtml_function_coverage=1 00:07:19.999 --rc genhtml_legend=1 00:07:19.999 --rc geninfo_all_blocks=1 00:07:19.999 --rc geninfo_unexecuted_blocks=1 00:07:19.999 00:07:19.999 ' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:19.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.999 --rc genhtml_branch_coverage=1 00:07:19.999 --rc genhtml_function_coverage=1 00:07:19.999 --rc genhtml_legend=1 00:07:19.999 --rc geninfo_all_blocks=1 00:07:19.999 --rc geninfo_unexecuted_blocks=1 00:07:19.999 00:07:19.999 ' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:19.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.999 --rc genhtml_branch_coverage=1 00:07:19.999 --rc genhtml_function_coverage=1 00:07:19.999 --rc genhtml_legend=1 00:07:19.999 --rc geninfo_all_blocks=1 00:07:19.999 --rc geninfo_unexecuted_blocks=1 00:07:19.999 00:07:19.999 ' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:19.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.999 --rc genhtml_branch_coverage=1 00:07:19.999 --rc genhtml_function_coverage=1 00:07:19.999 --rc genhtml_legend=1 00:07:19.999 --rc geninfo_all_blocks=1 00:07:19.999 --rc geninfo_unexecuted_blocks=1 00:07:19.999 00:07:19.999 ' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.999 11:01:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:19.999 11:01:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:26.588 11:01:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:26.588 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.588 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:26.589 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.589 11:01:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:26.589 Found net devices under 0000:86:00.0: cvl_0_0 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.589 
11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:26.589 Found net devices under 0000:86:00.1: cvl_0_1 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.589 11:01:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.589 11:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:26.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:07:26.589 00:07:26.589 --- 10.0.0.2 ping statistics --- 00:07:26.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.589 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:26.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:07:26.589 00:07:26.589 --- 10.0.0.1 ping statistics --- 00:07:26.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.589 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.589 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3917839 00:07:26.590 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3917839 00:07:26.590 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:26.590 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3917839 ']' 00:07:26.590 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.590 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.590 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.590 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.590 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.590 [2024-11-20 11:01:53.244198] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:07:26.590 [2024-11-20 11:01:53.244242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.590 [2024-11-20 11:01:53.324802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.590 [2024-11-20 11:01:53.369045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.590 [2024-11-20 11:01:53.369083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:26.590 [2024-11-20 11:01:53.369092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.590 [2024-11-20 11:01:53.369099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.590 [2024-11-20 11:01:53.369104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.590 [2024-11-20 11:01:53.370711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.590 [2024-11-20 11:01:53.370742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.590 [2024-11-20 11:01:53.370854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.590 [2024-11-20 11:01:53.370854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.850 11:01:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.850 [2024-11-20 11:01:54.193538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.850 Malloc0 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.850 
11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.850 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.851 [2024-11-20 11:01:54.241172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3918029 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3918031 
00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:26.851 { 00:07:26.851 "params": { 00:07:26.851 "name": "Nvme$subsystem", 00:07:26.851 "trtype": "$TEST_TRANSPORT", 00:07:26.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.851 "adrfam": "ipv4", 00:07:26.851 "trsvcid": "$NVMF_PORT", 00:07:26.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.851 "hdgst": ${hdgst:-false}, 00:07:26.851 "ddgst": ${ddgst:-false} 00:07:26.851 }, 00:07:26.851 "method": "bdev_nvme_attach_controller" 00:07:26.851 } 00:07:26.851 EOF 00:07:26.851 )") 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3918033 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:26.851 { 00:07:26.851 "params": { 00:07:26.851 
"name": "Nvme$subsystem", 00:07:26.851 "trtype": "$TEST_TRANSPORT", 00:07:26.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.851 "adrfam": "ipv4", 00:07:26.851 "trsvcid": "$NVMF_PORT", 00:07:26.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.851 "hdgst": ${hdgst:-false}, 00:07:26.851 "ddgst": ${ddgst:-false} 00:07:26.851 }, 00:07:26.851 "method": "bdev_nvme_attach_controller" 00:07:26.851 } 00:07:26.851 EOF 00:07:26.851 )") 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3918036 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:07:26.851 { 00:07:26.851 "params": { 00:07:26.851 "name": "Nvme$subsystem", 00:07:26.851 "trtype": "$TEST_TRANSPORT", 00:07:26.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.851 "adrfam": "ipv4", 00:07:26.851 "trsvcid": "$NVMF_PORT", 00:07:26.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.851 "hdgst": ${hdgst:-false}, 00:07:26.851 "ddgst": ${ddgst:-false} 00:07:26.851 }, 00:07:26.851 "method": "bdev_nvme_attach_controller" 00:07:26.851 } 00:07:26.851 EOF 00:07:26.851 )") 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:26.851 { 00:07:26.851 "params": { 00:07:26.851 "name": "Nvme$subsystem", 00:07:26.851 "trtype": "$TEST_TRANSPORT", 00:07:26.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.851 "adrfam": "ipv4", 00:07:26.851 "trsvcid": "$NVMF_PORT", 00:07:26.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.851 "hdgst": ${hdgst:-false}, 00:07:26.851 "ddgst": ${ddgst:-false} 00:07:26.851 }, 00:07:26.851 "method": "bdev_nvme_attach_controller" 00:07:26.851 } 00:07:26.851 EOF 00:07:26.851 )") 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3918029 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:26.851 
11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:26.851 "params": { 00:07:26.851 "name": "Nvme1", 00:07:26.851 "trtype": "tcp", 00:07:26.851 "traddr": "10.0.0.2", 00:07:26.851 "adrfam": "ipv4", 00:07:26.851 "trsvcid": "4420", 00:07:26.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:26.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:26.851 "hdgst": false, 00:07:26.851 "ddgst": false 00:07:26.851 }, 00:07:26.851 "method": "bdev_nvme_attach_controller" 00:07:26.851 }' 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:26.851 "params": { 00:07:26.851 "name": "Nvme1", 00:07:26.851 "trtype": "tcp", 00:07:26.851 "traddr": "10.0.0.2", 00:07:26.851 "adrfam": "ipv4", 00:07:26.851 "trsvcid": "4420", 00:07:26.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:26.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:26.851 "hdgst": false, 00:07:26.851 "ddgst": false 00:07:26.851 }, 00:07:26.851 "method": "bdev_nvme_attach_controller" 00:07:26.851 }' 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:26.851 "params": { 00:07:26.851 "name": "Nvme1", 00:07:26.851 "trtype": "tcp", 00:07:26.851 "traddr": "10.0.0.2", 00:07:26.851 "adrfam": "ipv4", 00:07:26.851 "trsvcid": "4420", 00:07:26.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:26.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:26.851 "hdgst": false, 00:07:26.851 "ddgst": false 00:07:26.851 }, 00:07:26.851 "method": "bdev_nvme_attach_controller" 00:07:26.851 }' 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:26.851 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:26.851 "params": { 00:07:26.851 "name": "Nvme1", 00:07:26.851 "trtype": "tcp", 00:07:26.851 "traddr": "10.0.0.2", 00:07:26.851 "adrfam": "ipv4", 00:07:26.851 "trsvcid": "4420", 00:07:26.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:26.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:26.851 "hdgst": false, 00:07:26.851 "ddgst": false 00:07:26.851 }, 00:07:26.851 "method": "bdev_nvme_attach_controller" 00:07:26.851 }' 00:07:26.851 [2024-11-20 11:01:54.293590] Starting SPDK v25.01-pre git sha1 
46fd068fc / DPDK 24.03.0 initialization... 00:07:26.851 [2024-11-20 11:01:54.293592] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:07:26.852 [2024-11-20 11:01:54.293643] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:26.852 [2024-11-20 11:01:54.293644] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:26.852 [2024-11-20 11:01:54.293764] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:07:26.852 [2024-11-20 11:01:54.293773] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:07:26.852 [2024-11-20 11:01:54.293800] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:26.852 [2024-11-20 11:01:54.293809] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:27.111 [2024-11-20 11:01:54.479274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.111 [2024-11-20 11:01:54.522664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:27.111 [2024-11-20 11:01:54.574563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.370 [2024-11-20 11:01:54.617647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:27.370 [2024-11-20
11:01:54.670628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.370 [2024-11-20 11:01:54.713749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:27.370 [2024-11-20 11:01:54.770753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.370 [2024-11-20 11:01:54.820553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:27.627 Running I/O for 1 seconds... 00:07:27.627 Running I/O for 1 seconds... 00:07:27.627 Running I/O for 1 seconds... 00:07:27.627 Running I/O for 1 seconds... 00:07:28.561 238064.00 IOPS, 929.94 MiB/s 00:07:28.561 Latency(us) 00:07:28.561 [2024-11-20T10:01:56.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.561 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:28.561 Nvme1n1 : 1.00 237658.00 928.35 0.00 0.00 535.32 235.07 1688.26 00:07:28.561 [2024-11-20T10:01:56.057Z] =================================================================================================================== 00:07:28.561 [2024-11-20T10:01:56.057Z] Total : 237658.00 928.35 0.00 0.00 535.32 235.07 1688.26 00:07:28.561 11500.00 IOPS, 44.92 MiB/s [2024-11-20T10:01:56.057Z] 11326.00 IOPS, 44.24 MiB/s 00:07:28.561 Latency(us) 00:07:28.561 [2024-11-20T10:01:56.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.561 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:28.561 Nvme1n1 : 1.01 11546.34 45.10 0.00 0.00 11044.50 6069.20 12936.24 00:07:28.561 [2024-11-20T10:01:56.057Z] =================================================================================================================== 00:07:28.561 [2024-11-20T10:01:56.057Z] Total : 11546.34 45.10 0.00 0.00 11044.50 6069.20 12936.24 00:07:28.561 9704.00 IOPS, 37.91 MiB/s 00:07:28.561 Latency(us) 00:07:28.561 [2024-11-20T10:01:56.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.561 
Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:28.561 Nvme1n1 : 1.01 11395.82 44.51 0.00 0.00 11197.13 4701.50 23251.03 00:07:28.561 [2024-11-20T10:01:56.057Z] =================================================================================================================== 00:07:28.561 [2024-11-20T10:01:56.057Z] Total : 11395.82 44.51 0.00 0.00 11197.13 4701.50 23251.03 00:07:28.561 00:07:28.561 Latency(us) 00:07:28.561 [2024-11-20T10:01:56.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.561 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:28.561 Nvme1n1 : 1.01 9780.30 38.20 0.00 0.00 13045.71 4673.00 23478.98 00:07:28.561 [2024-11-20T10:01:56.057Z] =================================================================================================================== 00:07:28.561 [2024-11-20T10:01:56.057Z] Total : 9780.30 38.20 0.00 0.00 13045.71 4673.00 23478.98 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3918031 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3918033 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3918036 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:28.819 11:01:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.819 rmmod nvme_tcp 00:07:28.819 rmmod nvme_fabrics 00:07:28.819 rmmod nvme_keyring 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3917839 ']' 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3917839 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3917839 ']' 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3917839 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3917839 
00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3917839' 00:07:28.819 killing process with pid 3917839 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3917839 00:07:28.819 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3917839 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.078 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.078 11:01:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:31.613 00:07:31.613 real 0m11.507s 00:07:31.613 user 0m19.101s 00:07:31.613 sys 0m6.302s 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.613 ************************************ 00:07:31.613 END TEST nvmf_bdev_io_wait 00:07:31.613 ************************************ 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.613 ************************************ 00:07:31.613 START TEST nvmf_queue_depth 00:07:31.613 ************************************ 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:31.613 * Looking for test storage... 
00:07:31.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:31.613 
11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:31.613 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:31.613 --rc genhtml_branch_coverage=1 00:07:31.613 --rc genhtml_function_coverage=1 00:07:31.613 --rc genhtml_legend=1 00:07:31.613 --rc geninfo_all_blocks=1 00:07:31.613 --rc geninfo_unexecuted_blocks=1 00:07:31.613 00:07:31.613 ' 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:31.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.613 --rc genhtml_branch_coverage=1 00:07:31.613 --rc genhtml_function_coverage=1 00:07:31.613 --rc genhtml_legend=1 00:07:31.613 --rc geninfo_all_blocks=1 00:07:31.613 --rc geninfo_unexecuted_blocks=1 00:07:31.613 00:07:31.613 ' 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:31.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.613 --rc genhtml_branch_coverage=1 00:07:31.613 --rc genhtml_function_coverage=1 00:07:31.613 --rc genhtml_legend=1 00:07:31.613 --rc geninfo_all_blocks=1 00:07:31.613 --rc geninfo_unexecuted_blocks=1 00:07:31.613 00:07:31.613 ' 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:31.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.613 --rc genhtml_branch_coverage=1 00:07:31.613 --rc genhtml_function_coverage=1 00:07:31.613 --rc genhtml_legend=1 00:07:31.613 --rc geninfo_all_blocks=1 00:07:31.613 --rc geninfo_unexecuted_blocks=1 00:07:31.613 00:07:31.613 ' 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.613 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.614 11:01:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.614 11:01:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.614 11:01:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.614 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:37.011 11:02:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:37.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:37.011 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.011 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:37.012 Found net devices under 0000:86:00.0: cvl_0_0 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:37.012 Found net devices under 0000:86:00.1: cvl_0_1 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.012 
11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.012 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:07:37.272 00:07:37.272 --- 10.0.0.2 ping statistics --- 00:07:37.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.272 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:07:37.272 00:07:37.272 --- 10.0.0.1 ping statistics --- 00:07:37.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.272 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3922053 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
3922053 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3922053 ']' 00:07:37.272 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.532 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.532 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.532 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.532 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.532 [2024-11-20 11:02:04.815317] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:07:37.532 [2024-11-20 11:02:04.815368] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.532 [2024-11-20 11:02:04.900207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.532 [2024-11-20 11:02:04.940528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.532 [2024-11-20 11:02:04.940563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:37.532 [2024-11-20 11:02:04.940571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.532 [2024-11-20 11:02:04.940577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.532 [2024-11-20 11:02:04.940583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.532 [2024-11-20 11:02:04.941158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.792 [2024-11-20 11:02:05.084257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.792 Malloc0 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.792 [2024-11-20 11:02:05.134416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.792 11:02:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3922077 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3922077 /var/tmp/bdevperf.sock 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3922077 ']' 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:37.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.792 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.792 [2024-11-20 11:02:05.179134] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:07:37.792 [2024-11-20 11:02:05.179174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922077 ] 00:07:37.792 [2024-11-20 11:02:05.254825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.052 [2024-11-20 11:02:05.297945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.052 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.052 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:38.053 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:38.053 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.053 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.315 NVMe0n1 00:07:38.315 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.315 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:38.315 Running I/O for 10 seconds... 
00:07:40.190 11281.00 IOPS, 44.07 MiB/s [2024-11-20T10:02:09.063Z] 11776.00 IOPS, 46.00 MiB/s [2024-11-20T10:02:10.001Z] 11941.00 IOPS, 46.64 MiB/s [2024-11-20T10:02:10.957Z] 12020.50 IOPS, 46.96 MiB/s [2024-11-20T10:02:11.895Z] 12070.00 IOPS, 47.15 MiB/s [2024-11-20T10:02:12.832Z] 12107.17 IOPS, 47.29 MiB/s [2024-11-20T10:02:13.768Z] 12133.00 IOPS, 47.39 MiB/s [2024-11-20T10:02:14.705Z] 12144.50 IOPS, 47.44 MiB/s [2024-11-20T10:02:16.083Z] 12178.33 IOPS, 47.57 MiB/s [2024-11-20T10:02:16.083Z] 12192.50 IOPS, 47.63 MiB/s 00:07:48.587 Latency(us) 00:07:48.587 [2024-11-20T10:02:16.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.587 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:48.587 Verification LBA range: start 0x0 length 0x4000 00:07:48.587 NVMe0n1 : 10.05 12230.99 47.78 0.00 0.00 83425.58 12822.26 57443.73 00:07:48.587 [2024-11-20T10:02:16.083Z] =================================================================================================================== 00:07:48.587 [2024-11-20T10:02:16.083Z] Total : 12230.99 47.78 0.00 0.00 83425.58 12822.26 57443.73 00:07:48.587 { 00:07:48.587 "results": [ 00:07:48.587 { 00:07:48.587 "job": "NVMe0n1", 00:07:48.587 "core_mask": "0x1", 00:07:48.587 "workload": "verify", 00:07:48.587 "status": "finished", 00:07:48.587 "verify_range": { 00:07:48.587 "start": 0, 00:07:48.587 "length": 16384 00:07:48.587 }, 00:07:48.587 "queue_depth": 1024, 00:07:48.587 "io_size": 4096, 00:07:48.587 "runtime": 10.052249, 00:07:48.587 "iops": 12230.994277996893, 00:07:48.587 "mibps": 47.777321398425364, 00:07:48.587 "io_failed": 0, 00:07:48.587 "io_timeout": 0, 00:07:48.587 "avg_latency_us": 83425.57869821598, 00:07:48.587 "min_latency_us": 12822.260869565218, 00:07:48.587 "max_latency_us": 57443.72869565217 00:07:48.587 } 00:07:48.587 ], 00:07:48.587 "core_count": 1 00:07:48.587 } 00:07:48.587 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 3922077 00:07:48.587 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3922077 ']' 00:07:48.587 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3922077 00:07:48.587 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:48.587 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.587 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3922077 00:07:48.587 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.587 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.587 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3922077' 00:07:48.587 killing process with pid 3922077 00:07:48.587 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3922077 00:07:48.587 Received shutdown signal, test time was about 10.000000 seconds 00:07:48.587 00:07:48.587 Latency(us) 00:07:48.587 [2024-11-20T10:02:16.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.587 [2024-11-20T10:02:16.083Z] =================================================================================================================== 00:07:48.587 [2024-11-20T10:02:16.083Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:48.588 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3922077 00:07:48.588 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:48.588 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:48.588 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:48.588 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:48.588 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.588 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:48.588 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.588 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.588 rmmod nvme_tcp 00:07:48.588 rmmod nvme_fabrics 00:07:48.588 rmmod nvme_keyring 00:07:48.588 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.588 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:48.588 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:48.588 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3922053 ']' 00:07:48.588 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3922053 00:07:48.588 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3922053 ']' 00:07:48.588 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3922053 00:07:48.588 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:48.588 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.588 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3922053 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3922053' 00:07:48.847 killing process with pid 3922053 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3922053 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3922053 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.847 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.383 11:02:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.383 00:07:51.383 real 0m19.790s 00:07:51.383 user 0m23.200s 00:07:51.383 sys 0m6.049s 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.383 ************************************ 00:07:51.383 END TEST nvmf_queue_depth 00:07:51.383 ************************************ 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.383 ************************************ 00:07:51.383 START TEST nvmf_target_multipath 00:07:51.383 ************************************ 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:51.383 * Looking for test storage... 
00:07:51.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:51.383 11:02:18 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
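The xtrace above walks through scripts/common.sh splitting two dotted versions on `IFS=.-:` and comparing them field by field (`lt 1.15 2`, `decimal 1`, `decimal 2`). A minimal standalone sketch of that comparison pattern, assuming nothing beyond bash — the function name `ver_lt` is illustrative, not SPDK's (the trace shows `cmp_versions`):

```shell
#!/usr/bin/env bash
# Field-by-field dotted-version compare, mirroring the cmp_versions trace
# above. ver_lt is an illustrative name; SPDK's helper is cmp_versions.
ver_lt() {
    local -a v1 v2
    local i f1 f2 len
    IFS=.-: read -ra v1 <<< "$1"    # "1.15" -> (1 15), as in the log
    IFS=.-: read -ra v2 <<< "$2"
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        f1=${v1[i]:-0}              # a missing field compares as 0
        f2=${v2[i]:-0}
        if (( f1 < f2 )); then return 0; fi   # strictly older
        if (( f1 > f2 )); then return 1; fi   # newer
    done
    return 1                        # equal, so not strictly less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Splitting on `IFS=.-:` rather than just `.` is why the trace uses `read -ra` with that exact IFS: it also breaks apart `-` and `:` separated version suffixes before the numeric loop runs.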
00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.383 --rc genhtml_branch_coverage=1 00:07:51.383 --rc genhtml_function_coverage=1 00:07:51.383 --rc genhtml_legend=1 00:07:51.383 --rc geninfo_all_blocks=1 00:07:51.383 --rc geninfo_unexecuted_blocks=1 00:07:51.383 00:07:51.383 ' 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.383 --rc genhtml_branch_coverage=1 00:07:51.383 --rc genhtml_function_coverage=1 00:07:51.383 --rc genhtml_legend=1 00:07:51.383 --rc geninfo_all_blocks=1 00:07:51.383 --rc geninfo_unexecuted_blocks=1 00:07:51.383 00:07:51.383 ' 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.383 --rc genhtml_branch_coverage=1 00:07:51.383 --rc genhtml_function_coverage=1 00:07:51.383 --rc genhtml_legend=1 00:07:51.383 --rc geninfo_all_blocks=1 00:07:51.383 --rc geninfo_unexecuted_blocks=1 00:07:51.383 00:07:51.383 ' 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.383 --rc genhtml_branch_coverage=1 00:07:51.383 --rc genhtml_function_coverage=1 00:07:51.383 --rc genhtml_legend=1 00:07:51.383 --rc geninfo_all_blocks=1 00:07:51.383 --rc geninfo_unexecuted_blocks=1 00:07:51.383 00:07:51.383 ' 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.383 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.384 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:57.951 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:57.952 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:57.952 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:57.952 Found net devices under 0000:86:00.0: cvl_0_0 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:57.952 11:02:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:57.952 Found net devices under 0000:86:00.1: cvl_0_1 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
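The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` glob and the `${pci_net_devs[@]##*/}` strip in the trace above are the whole discovery mechanism: sysfs exposes each port's kernel netdev as a directory entry under the PCI function. A hedged sketch of that lookup — the helper name and the optional base-dir parameter are additions of mine so the pattern can be exercised outside a real /sys:

```shell
#!/usr/bin/env bash
# List the kernel net devices bound to a PCI function, the way the
# nvmf/common.sh trace above does it. net_devs_for_pci is an illustrative
# name; the base-dir argument exists only so the sketch is testable.
net_devs_for_pci() {
    local pci=$1 base=${2:-/sys/bus/pci/devices}
    local -a devs=("$base/$pci/net/"*)     # one glob hit per netdev
    [[ -e ${devs[0]} ]] || return 1        # unmatched glob: no netdevs bound
    devs=("${devs[@]##*/}")                # strip the path, keep device names
    printf '%s\n' "${devs[@]}"
}

# On the machine in this log: net_devs_for_pci 0000:86:00.0 -> cvl_0_0
```

When the glob matches nothing, bash leaves the literal pattern in `devs[0]`, so the `-e` test is what distinguishes "no netdevs" from a real hit — the same reason the traced script checks `(( 1 == 0 ))` against the array length before echoing "Found net devices under …".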
00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:57.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:07:57.952 00:07:57.952 --- 10.0.0.2 ping statistics --- 00:07:57.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.952 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:57.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:07:57.952 00:07:57.952 --- 10.0.0.1 ping statistics --- 00:07:57.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.952 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:57.952 only one NIC for nvmf test 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:57.952 11:02:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:57.952 rmmod nvme_tcp 00:07:57.952 rmmod nvme_fabrics 00:07:57.952 rmmod nvme_keyring 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.952 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:07:57.953 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.953 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.953 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.331 00:07:59.331 real 0m8.370s 00:07:59.331 user 0m1.878s 00:07:59.331 sys 0m4.520s 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.331 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:59.331 ************************************ 00:07:59.331 END TEST nvmf_target_multipath 00:07:59.331 ************************************ 00:07:59.591 11:02:26 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:59.591 11:02:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.591 11:02:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.591 11:02:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.591 ************************************ 00:07:59.591 START TEST nvmf_zcopy 00:07:59.591 ************************************ 00:07:59.591 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:59.591 * Looking for test storage... 00:07:59.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.591 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.591 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.591 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.591 11:02:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.591 --rc genhtml_branch_coverage=1 00:07:59.591 --rc genhtml_function_coverage=1 00:07:59.591 --rc genhtml_legend=1 00:07:59.591 --rc geninfo_all_blocks=1 00:07:59.591 --rc geninfo_unexecuted_blocks=1 00:07:59.591 00:07:59.591 ' 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.591 --rc genhtml_branch_coverage=1 00:07:59.591 --rc genhtml_function_coverage=1 00:07:59.591 --rc genhtml_legend=1 00:07:59.591 --rc geninfo_all_blocks=1 00:07:59.591 --rc geninfo_unexecuted_blocks=1 00:07:59.591 00:07:59.591 ' 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.591 --rc genhtml_branch_coverage=1 00:07:59.591 --rc genhtml_function_coverage=1 00:07:59.591 --rc genhtml_legend=1 00:07:59.591 --rc geninfo_all_blocks=1 00:07:59.591 --rc geninfo_unexecuted_blocks=1 00:07:59.591 00:07:59.591 ' 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.591 --rc genhtml_branch_coverage=1 00:07:59.591 --rc 
genhtml_function_coverage=1 00:07:59.591 --rc genhtml_legend=1 00:07:59.591 --rc geninfo_all_blocks=1 00:07:59.591 --rc geninfo_unexecuted_blocks=1 00:07:59.591 00:07:59.591 ' 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.591 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.592 11:02:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.592 11:02:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.592 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.850 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.850 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.851 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.851 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:06.420 11:02:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:06.420 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:06.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:06.420 Found net devices under 0000:86:00.0: cvl_0_0 00:08:06.420 11:02:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:06.420 Found net devices under 0000:86:00.1: cvl_0_1 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.420 11:02:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:06.420 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.420 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.420 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.420 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:06.420 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:06.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:08:06.420 00:08:06.420 --- 10.0.0.2 ping statistics --- 00:08:06.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.420 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:08:06.420 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:08:06.420 00:08:06.421 --- 10.0.0.1 ping statistics --- 00:08:06.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.421 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3930984 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3930984 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3930984 ']' 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.421 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.421 [2024-11-20 11:02:33.152915] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:08:06.421 [2024-11-20 11:02:33.152963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.421 [2024-11-20 11:02:33.232580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.421 [2024-11-20 11:02:33.272965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.421 [2024-11-20 11:02:33.273003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:06.421 [2024-11-20 11:02:33.273011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.421 [2024-11-20 11:02:33.273018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.421 [2024-11-20 11:02:33.273024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.421 [2024-11-20 11:02:33.273610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.679 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.679 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:06.679 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.679 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.679 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.679 [2024-11-20 11:02:34.043033] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.679 [2024-11-20 11:02:34.063272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.679 malloc0 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.679 { 00:08:06.679 "params": { 00:08:06.679 "name": "Nvme$subsystem", 00:08:06.679 "trtype": "$TEST_TRANSPORT", 00:08:06.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.679 "adrfam": "ipv4", 00:08:06.679 "trsvcid": "$NVMF_PORT", 00:08:06.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.679 "hdgst": ${hdgst:-false}, 00:08:06.679 "ddgst": ${ddgst:-false} 00:08:06.679 }, 00:08:06.679 "method": "bdev_nvme_attach_controller" 00:08:06.679 } 00:08:06.679 EOF 00:08:06.679 )") 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:06.679 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.679 "params": { 00:08:06.679 "name": "Nvme1", 00:08:06.679 "trtype": "tcp", 00:08:06.679 "traddr": "10.0.0.2", 00:08:06.679 "adrfam": "ipv4", 00:08:06.679 "trsvcid": "4420", 00:08:06.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:06.679 "hdgst": false, 00:08:06.679 "ddgst": false 00:08:06.679 }, 00:08:06.680 "method": "bdev_nvme_attach_controller" 00:08:06.680 }' 00:08:06.680 [2024-11-20 11:02:34.146193] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:08:06.680 [2024-11-20 11:02:34.146233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931229 ] 00:08:06.938 [2024-11-20 11:02:34.222516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.938 [2024-11-20 11:02:34.264064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.196 Running I/O for 10 seconds... 
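The JSON that bdevperf consumes above is assembled by `gen_nvmf_target_json` in nvmf/common.sh: a heredoc template is expanded per subsystem, the pieces are joined on `IFS=,`, and the result is handed to bdevperf through `--json /dev/fd/62`. A simplified re-creation of that technique is sketched below; the address, port, and NQN values are copied from this trace, while the single-subsystem loop and the plain `echo` (the real helper pipes through `jq .` and supports several subsystems) are simplifications.

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json heredoc technique seen in the trace.
# Values below are taken from this log; this is not the full helper.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
hdgst=false
ddgst=false

config=()
for subsystem in 1; do
    # Each subsystem contributes one bdev_nvme_attach_controller stanza.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the stanzas with commas, as the real helper does before `jq .`.
IFS=','
json="${config[*]}"
echo "$json"
```

In the trace this output is generated in a process substitution, so bdevperf reads it as a pseudo-file (`/dev/fd/62`) rather than from a config file on disk.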
00:08:09.069 8424.00 IOPS, 65.81 MiB/s [2024-11-20T10:02:37.942Z] 8486.50 IOPS, 66.30 MiB/s [2024-11-20T10:02:38.878Z] 8527.00 IOPS, 66.62 MiB/s [2024-11-20T10:02:39.814Z] 8536.50 IOPS, 66.69 MiB/s [2024-11-20T10:02:40.750Z] 8548.80 IOPS, 66.79 MiB/s [2024-11-20T10:02:41.685Z] 8530.00 IOPS, 66.64 MiB/s [2024-11-20T10:02:42.620Z] 8530.00 IOPS, 66.64 MiB/s [2024-11-20T10:02:43.997Z] 8540.00 IOPS, 66.72 MiB/s [2024-11-20T10:02:44.933Z] 8539.89 IOPS, 66.72 MiB/s [2024-11-20T10:02:44.933Z] 8547.60 IOPS, 66.78 MiB/s 00:08:17.437 Latency(us) 00:08:17.437 [2024-11-20T10:02:44.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.437 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:17.437 Verification LBA range: start 0x0 length 0x1000 00:08:17.437 Nvme1n1 : 10.01 8548.60 66.79 0.00 0.00 14929.97 1837.86 22909.11 00:08:17.437 [2024-11-20T10:02:44.933Z] =================================================================================================================== 00:08:17.437 [2024-11-20T10:02:44.933Z] Total : 8548.60 66.79 0.00 0.00 14929.97 1837.86 22909.11 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3932891 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.437 11:02:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.437 { 00:08:17.437 "params": { 00:08:17.437 "name": "Nvme$subsystem", 00:08:17.437 "trtype": "$TEST_TRANSPORT", 00:08:17.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.437 "adrfam": "ipv4", 00:08:17.437 "trsvcid": "$NVMF_PORT", 00:08:17.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.437 "hdgst": ${hdgst:-false}, 00:08:17.437 "ddgst": ${ddgst:-false} 00:08:17.437 }, 00:08:17.437 "method": "bdev_nvme_attach_controller" 00:08:17.437 } 00:08:17.437 EOF 00:08:17.437 )") 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:17.437 [2024-11-20 11:02:44.745115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.437 [2024-11-20 11:02:44.745149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:17.437 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.437 "params": { 00:08:17.437 "name": "Nvme1", 00:08:17.437 "trtype": "tcp", 00:08:17.437 "traddr": "10.0.0.2", 00:08:17.437 "adrfam": "ipv4", 00:08:17.437 "trsvcid": "4420", 00:08:17.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.437 "hdgst": false, 00:08:17.437 "ddgst": false 00:08:17.437 }, 00:08:17.437 "method": "bdev_nvme_attach_controller" 00:08:17.437 }' 00:08:17.437 [2024-11-20 11:02:44.757111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.437 [2024-11-20 11:02:44.757125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 [2024-11-20 11:02:44.769137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.437 [2024-11-20 11:02:44.769148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 [2024-11-20 11:02:44.781166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.437 [2024-11-20 11:02:44.781177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 [2024-11-20 11:02:44.783288] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:08:17.437 [2024-11-20 11:02:44.783330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932891 ] 00:08:17.437 [2024-11-20 11:02:44.793201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.437 [2024-11-20 11:02:44.793223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 [2024-11-20 11:02:44.805230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.437 [2024-11-20 11:02:44.805241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 [2024-11-20 11:02:44.817263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.437 [2024-11-20 11:02:44.817274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 [2024-11-20 11:02:44.829293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.437 [2024-11-20 11:02:44.829304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 [2024-11-20 11:02:44.841325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.437 [2024-11-20 11:02:44.841336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 [2024-11-20 11:02:44.853357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.437 [2024-11-20 11:02:44.853367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 [2024-11-20 11:02:44.858002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.437 [2024-11-20 11:02:44.865389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:17.437 [2024-11-20 11:02:44.865402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.437 [2024-11-20 11:02:44.877422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.438 [2024-11-20 11:02:44.877437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.438 [2024-11-20 11:02:44.889456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.438 [2024-11-20 11:02:44.889467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.438 [2024-11-20 11:02:44.899801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.438 [2024-11-20 11:02:44.901491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.438 [2024-11-20 11:02:44.901504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.438 [2024-11-20 11:02:44.913534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.438 [2024-11-20 11:02:44.913551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.438 [2024-11-20 11:02:44.925567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.438 [2024-11-20 11:02:44.925589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:44.937604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:44.937626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:44.949626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:44.949639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:44.961650] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:44.961665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:44.973678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:44.973690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:44.985709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:44.985720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:44.997758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:44.997783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.009787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.009803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.021815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.021829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.033844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.033855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.045874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.045885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.057913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.057927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.069945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.069964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.081977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.081987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.094009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.094019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.106041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.106050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.118078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.118092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.130107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.130117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.142157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.142169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.154176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 
[2024-11-20 11:02:45.154189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.166204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.166214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.178238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.178248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.698 [2024-11-20 11:02:45.190284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.698 [2024-11-20 11:02:45.190300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.957 [2024-11-20 11:02:45.202313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.957 [2024-11-20 11:02:45.202329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.957 [2024-11-20 11:02:45.214348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.957 [2024-11-20 11:02:45.214366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.957 Running I/O for 5 seconds... 
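The environment this second bdevperf run reuses was assembled earlier in the trace: nvmf/common.sh moves the target-side port into a network namespace, addresses both ends of the 10.0.0.0/24 link, opens TCP/4420 in iptables, and zcopy.sh then provisions the subsystem over RPC. The dry-run sketch below echoes those steps instead of executing them, since they need root and an SPDK checkout; interface names, IPs, and NQNs are copied from the log, while the `scripts/rpc.py` invocation is an assumption (the trace goes through the `rpc_cmd` wrapper).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology and target provisioning from this
# trace. Commands are recorded and echoed, not run (root would be required).
CMDS=""
run() { CMDS+="$*"$'\n'; echo "+ $*"; }

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

# Loopback-style topology: the target interface lives in $NS, the initiator
# stays in the default namespace, both on 10.0.0.0/24.
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Target provisioning, mirroring the zcopy.sh rpc_cmd calls in the trace.
run scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
run scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10
run scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
run scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
run scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
run scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

The repeated "Requested NSID 1 already in use" errors around this point come from the test deliberately re-issuing `nvmf_subsystem_add_ns` for an NSID that the last step already claimed, which is why each attempt is answered with "Unable to add namespace".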
00:08:17.957 [2024-11-20 11:02:45.226370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.957 [2024-11-20 11:02:45.226381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.957 [2024-11-20 11:02:45.242469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.957 [2024-11-20 11:02:45.242489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.957 [2024-11-20 11:02:45.257401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.957 [2024-11-20 11:02:45.257421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.957 [2024-11-20 11:02:45.268438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.957 [2024-11-20 11:02:45.268458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.282561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.282580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.295952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.295971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.310282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.310304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.321495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.321515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.335853] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.335874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.344922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.344941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.359695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.359714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.373654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.373673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.387463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.387482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.401432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.401451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.415449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.415472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.429492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.429510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.958 [2024-11-20 11:02:45.443576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:17.958 [2024-11-20 11:02:45.443595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats continuously at roughly 10-15 ms intervals from 11:02:45.443 through 11:02:47.649 (elapsed 00:08:17.958 to 00:08:20.294); only the interleaved fio progress readings differ and are preserved below ...]
16405.00 IOPS, 128.16 MiB/s [2024-11-20T10:02:46.233Z]
16478.00 IOPS, 128.73 MiB/s [2024-11-20T10:02:47.271Z]
[2024-11-20 11:02:47.649442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.649462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:20.294 [2024-11-20 11:02:47.664069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.664089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.294 [2024-11-20 11:02:47.678190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.678210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.294 [2024-11-20 11:02:47.688991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.689010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.294 [2024-11-20 11:02:47.703536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.703556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.294 [2024-11-20 11:02:47.717719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.717739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.294 [2024-11-20 11:02:47.728762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.728784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.294 [2024-11-20 11:02:47.743319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.743339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.294 [2024-11-20 11:02:47.757216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.757236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.294 [2024-11-20 11:02:47.771298] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.771322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.294 [2024-11-20 11:02:47.785492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.294 [2024-11-20 11:02:47.785513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.799690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.799710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.813487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.813506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.828062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.828081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.842833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.842851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.858676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.858696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.873018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.873038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.886799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.886818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.900865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.900884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.914779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.914798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.928502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.928520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.942895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.942914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.956679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.956697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.970743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.970762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:47.985175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:47.985194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:48.000697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 
[2024-11-20 11:02:48.000715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:48.015157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:48.015176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:48.029537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:48.029556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.553 [2024-11-20 11:02:48.040418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.553 [2024-11-20 11:02:48.040440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.050019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.050039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.065115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.065135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.080339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.080359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.094698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.094717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.108943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.108967] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.120321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.120340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.129741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.129760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.144390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.144408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.155793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.155812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.170451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.170470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.181933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.181958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.196480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.196499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.207927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.207951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:20.812 [2024-11-20 11:02:48.222387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.222406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 16475.67 IOPS, 128.72 MiB/s [2024-11-20T10:02:48.308Z] [2024-11-20 11:02:48.236500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.236519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.247233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.247253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.261772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.261790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.275808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.275827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.290026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.290048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.812 [2024-11-20 11:02:48.301038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.812 [2024-11-20 11:02:48.301057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.315710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.315729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:21.071 [2024-11-20 11:02:48.329549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.329569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.343234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.343253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.357281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.357300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.371782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.371802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.386963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.386982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.401308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.401327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.415674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.415693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.430887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.430906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.445707] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.445725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.460784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.460803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.475393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.071 [2024-11-20 11:02:48.475413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.071 [2024-11-20 11:02:48.489412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.072 [2024-11-20 11:02:48.489431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.072 [2024-11-20 11:02:48.504142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.072 [2024-11-20 11:02:48.504161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.072 [2024-11-20 11:02:48.519464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.072 [2024-11-20 11:02:48.519484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.072 [2024-11-20 11:02:48.533893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.072 [2024-11-20 11:02:48.533913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.072 [2024-11-20 11:02:48.544991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.072 [2024-11-20 11:02:48.545011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.072 [2024-11-20 11:02:48.559372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:21.072 [2024-11-20 11:02:48.559391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.573052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.573072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.587312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.587332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.601473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.601492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.615936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.615960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.631335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.631355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.645358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.645377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.659190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.659210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.673642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 
[2024-11-20 11:02:48.673661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.684046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.684066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.698719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.698739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.712707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.712726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.726699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.726719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.740994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.741013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.755371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.755389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.770832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.770851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.785347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.785367] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.799629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.799650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.331 [2024-11-20 11:02:48.814022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.331 [2024-11-20 11:02:48.814044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.825720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.825741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.840239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.840260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.853913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.853932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.868617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.868636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.883370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.883390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.897672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.897691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:21.591 [2024-11-20 11:02:48.911424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.911444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.925675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.925695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.939952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.939972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.951384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.951403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.965825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.965844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.980049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.980069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:48.994910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:48.994930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:49.009459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:49.009479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:49.019231] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:49.019250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:49.033743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:49.033762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:49.048041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:49.048060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:49.062298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:49.062318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.591 [2024-11-20 11:02:49.076675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.591 [2024-11-20 11:02:49.076695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.087640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.087660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.097563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.097583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.112496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.112516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.127565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.127585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.142167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.142187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.156302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.156321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.170635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.170655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.184796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.184815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.199098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.199118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.213651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.213670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 [2024-11-20 11:02:49.224553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.224572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.852 16456.50 IOPS, 128.57 MiB/s [2024-11-20T10:02:49.348Z] [2024-11-20 11:02:49.239070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:21.852 [2024-11-20 11:02:49.239089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.891 16465.80 IOPS, 128.64 MiB/s 00:08:22.891 Latency(us) 00:08:22.891 [2024-11-20T10:02:50.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.891 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:08:22.891 Nvme1n1 : 5.01 16474.29 128.71 0.00 0.00 7763.61 3590.23 16526.47 00:08:22.891 [2024-11-20T10:02:50.387Z] =================================================================================================================== 00:08:22.891 [2024-11-20T10:02:50.387Z] Total : 16474.29 128.71 0.00 0.00 7763.61 3590.23 16526.47 00:08:22.891 [2024-11-20 11:02:50.243772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.891 [2024-11-20 11:02:50.243791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3932891) - No such process 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 
3932891 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.150 delay0 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.150 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:23.150 [2024-11-20 11:02:50.503668] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:29.717 [2024-11-20 11:02:56.928821] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2203d60 is same with the state(6) to be set 00:08:29.717 [2024-11-20 11:02:56.928863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2203d60 is same with the state(6) to be set 00:08:29.717 Initializing NVMe Controllers 00:08:29.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:29.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:29.717 Initialization complete. Launching workers. 00:08:29.717 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 295, failed: 9343 00:08:29.717 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9572, failed to submit 66 00:08:29.717 success 9419, unsuccessful 153, failed 0 00:08:29.717 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:29.717 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:29.717 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:29.717 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:29.717 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.717 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:29.717 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.717 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.717 rmmod nvme_tcp 00:08:29.717 rmmod nvme_fabrics 00:08:29.717 rmmod nvme_keyring 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:29.717 11:02:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3930984 ']' 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3930984 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3930984 ']' 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3930984 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3930984 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3930984' 00:08:29.717 killing process with pid 3930984 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3930984 00:08:29.717 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3930984 00:08:29.976 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.976 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.976 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.976 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:29.976 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # 
iptables-save 00:08:29.976 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.977 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.977 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.977 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:29.977 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.977 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.977 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.883 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:31.883 00:08:31.883 real 0m32.452s 00:08:31.883 user 0m43.071s 00:08:31.883 sys 0m11.563s 00:08:31.883 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.883 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:31.883 ************************************ 00:08:31.883 END TEST nvmf_zcopy 00:08:31.883 ************************************ 00:08:31.883 11:02:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:31.883 11:02:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:31.883 11:02:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.883 11:02:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.143 ************************************ 00:08:32.143 START TEST nvmf_nmic 00:08:32.143 
************************************ 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:32.143 * Looking for test storage... 00:08:32.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- scripts/common.sh@344 -- # case "$op" in 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:32.143 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.143 --rc genhtml_branch_coverage=1 00:08:32.143 --rc genhtml_function_coverage=1 00:08:32.143 --rc genhtml_legend=1 00:08:32.143 --rc geninfo_all_blocks=1 00:08:32.143 --rc geninfo_unexecuted_blocks=1 00:08:32.143 00:08:32.143 ' 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:32.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.143 --rc genhtml_branch_coverage=1 00:08:32.143 --rc genhtml_function_coverage=1 00:08:32.143 --rc genhtml_legend=1 00:08:32.143 --rc geninfo_all_blocks=1 00:08:32.143 --rc geninfo_unexecuted_blocks=1 00:08:32.143 00:08:32.143 ' 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:32.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.143 --rc genhtml_branch_coverage=1 00:08:32.143 --rc genhtml_function_coverage=1 00:08:32.143 --rc genhtml_legend=1 00:08:32.143 --rc geninfo_all_blocks=1 00:08:32.143 --rc geninfo_unexecuted_blocks=1 00:08:32.143 00:08:32.143 ' 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:32.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.143 --rc genhtml_branch_coverage=1 00:08:32.143 --rc genhtml_function_coverage=1 00:08:32.143 --rc genhtml_legend=1 00:08:32.143 --rc geninfo_all_blocks=1 00:08:32.143 --rc geninfo_unexecuted_blocks=1 00:08:32.143 00:08:32.143 ' 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.143 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:32.144 
11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:32.144 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.900 11:03:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.900 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:38.901 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:38.901 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:38.901 Found net devices under 0000:86:00.0: cvl_0_0 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:38.901 Found net devices under 0000:86:00.1: cvl_0_1 00:08:38.901 
11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:38.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:08:38.901 00:08:38.901 --- 10.0.0.2 ping statistics --- 00:08:38.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.901 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:08:38.901 00:08:38.901 --- 10.0.0.1 ping statistics --- 00:08:38.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.901 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3938587 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3938587 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3938587 ']' 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.901 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.901 [2024-11-20 11:03:05.613131] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:08:38.901 [2024-11-20 11:03:05.613185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.901 [2024-11-20 11:03:05.692990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.901 [2024-11-20 11:03:05.740750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.901 [2024-11-20 11:03:05.740785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:38.901 [2024-11-20 11:03:05.740792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.901 [2024-11-20 11:03:05.740799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.902 [2024-11-20 11:03:05.740804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.902 [2024-11-20 11:03:05.742337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.902 [2024-11-20 11:03:05.742357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.902 [2024-11-20 11:03:05.742391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.902 [2024-11-20 11:03:05.742392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.902 [2024-11-20 11:03:05.879318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.902 
11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.902 Malloc0 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.902 [2024-11-20 11:03:05.946367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:38.902 test case1: single bdev can't be used in multiple subsystems 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.902 [2024-11-20 11:03:05.974274] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:38.902 [2024-11-20 
11:03:05.974293] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:38.902 [2024-11-20 11:03:05.974300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.902 request: 00:08:38.902 { 00:08:38.902 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:38.902 "namespace": { 00:08:38.902 "bdev_name": "Malloc0", 00:08:38.902 "no_auto_visible": false 00:08:38.902 }, 00:08:38.902 "method": "nvmf_subsystem_add_ns", 00:08:38.902 "req_id": 1 00:08:38.902 } 00:08:38.902 Got JSON-RPC error response 00:08:38.902 response: 00:08:38.902 { 00:08:38.902 "code": -32602, 00:08:38.902 "message": "Invalid parameters" 00:08:38.902 } 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:38.902 Adding namespace failed - expected result. 
00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:38.902 test case2: host connect to nvmf target in multiple paths 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.902 [2024-11-20 11:03:05.986417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.902 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:39.838 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:41.214 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:41.214 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:41.214 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:41.214 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:41.214 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:43.116 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:43.116 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:43.116 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:43.116 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:43.116 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:43.116 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:43.116 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:43.116 [global] 00:08:43.116 thread=1 00:08:43.116 invalidate=1 00:08:43.116 rw=write 00:08:43.116 time_based=1 00:08:43.116 runtime=1 00:08:43.116 ioengine=libaio 00:08:43.116 direct=1 00:08:43.116 bs=4096 00:08:43.116 iodepth=1 00:08:43.116 norandommap=0 00:08:43.116 numjobs=1 00:08:43.116 00:08:43.116 verify_dump=1 00:08:43.116 verify_backlog=512 00:08:43.116 verify_state_save=0 00:08:43.116 do_verify=1 00:08:43.116 verify=crc32c-intel 00:08:43.116 [job0] 00:08:43.117 filename=/dev/nvme0n1 00:08:43.117 Could not set queue depth (nvme0n1) 00:08:43.375 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.375 fio-3.35 00:08:43.375 Starting 1 thread 00:08:44.310 00:08:44.310 job0: (groupid=0, jobs=1): err= 0: pid=3940035: Wed Nov 20 11:03:11 2024 00:08:44.310 read: IOPS=22, BW=89.7KiB/s (91.8kB/s)(92.0KiB/1026msec) 00:08:44.310 slat (nsec): min=9790, max=24827, avg=22059.04, stdev=2777.51 00:08:44.310 clat (usec): min=40845, max=42166, avg=41411.14, stdev=510.31 00:08:44.310 lat (usec): min=40867, max=42175, 
avg=41433.20, stdev=509.60 00:08:44.310 clat percentiles (usec): 00:08:44.310 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:44.310 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:08:44.310 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:44.310 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:44.310 | 99.99th=[42206] 00:08:44.310 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:08:44.310 slat (nsec): min=9365, max=39342, avg=10398.02, stdev=1610.35 00:08:44.310 clat (usec): min=110, max=331, avg=129.87, stdev=16.88 00:08:44.310 lat (usec): min=121, max=371, avg=140.27, stdev=17.66 00:08:44.310 clat percentiles (usec): 00:08:44.310 | 1.00th=[ 117], 5.00th=[ 119], 10.00th=[ 120], 20.00th=[ 122], 00:08:44.310 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 128], 00:08:44.310 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 147], 95.00th=[ 163], 00:08:44.310 | 99.00th=[ 176], 99.50th=[ 233], 99.90th=[ 334], 99.95th=[ 334], 00:08:44.310 | 99.99th=[ 334] 00:08:44.310 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:44.310 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:44.310 lat (usec) : 250=95.33%, 500=0.37% 00:08:44.310 lat (msec) : 50=4.30% 00:08:44.310 cpu : usr=0.39%, sys=0.39%, ctx=535, majf=0, minf=1 00:08:44.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:44.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.310 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:44.310 00:08:44.310 Run status group 0 (all jobs): 00:08:44.310 READ: bw=89.7KiB/s (91.8kB/s), 89.7KiB/s-89.7KiB/s (91.8kB/s-91.8kB/s), io=92.0KiB (94.2kB), 
run=1026-1026msec 00:08:44.310 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:08:44.310 00:08:44.310 Disk stats (read/write): 00:08:44.310 nvme0n1: ios=69/512, merge=0/0, ticks=803/66, in_queue=869, util=91.38% 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:08:44.569 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.569 rmmod nvme_tcp 00:08:44.569 rmmod nvme_fabrics 00:08:44.569 rmmod nvme_keyring 00:08:44.569 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.569 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:44.569 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:44.569 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3938587 ']' 00:08:44.570 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3938587 00:08:44.570 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3938587 ']' 00:08:44.570 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3938587 00:08:44.570 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3938587 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3938587' 00:08:44.829 killing process with pid 3938587 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3938587 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3938587 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.829 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.368 00:08:47.368 real 0m14.976s 00:08:47.368 user 0m33.105s 00:08:47.368 sys 0m5.221s 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.368 ************************************ 00:08:47.368 END TEST nvmf_nmic 00:08:47.368 ************************************ 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.368 ************************************ 00:08:47.368 START TEST nvmf_fio_target 00:08:47.368 ************************************ 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:47.368 * Looking for test storage... 00:08:47.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.368 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.369 11:03:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.369 --rc genhtml_branch_coverage=1 00:08:47.369 --rc genhtml_function_coverage=1 00:08:47.369 --rc genhtml_legend=1 00:08:47.369 --rc geninfo_all_blocks=1 00:08:47.369 --rc geninfo_unexecuted_blocks=1 00:08:47.369 00:08:47.369 ' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.369 --rc genhtml_branch_coverage=1 00:08:47.369 --rc genhtml_function_coverage=1 00:08:47.369 --rc genhtml_legend=1 00:08:47.369 --rc geninfo_all_blocks=1 00:08:47.369 --rc geninfo_unexecuted_blocks=1 00:08:47.369 00:08:47.369 ' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.369 --rc genhtml_branch_coverage=1 00:08:47.369 --rc genhtml_function_coverage=1 00:08:47.369 --rc genhtml_legend=1 00:08:47.369 --rc geninfo_all_blocks=1 00:08:47.369 --rc geninfo_unexecuted_blocks=1 00:08:47.369 00:08:47.369 ' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.369 --rc 
genhtml_branch_coverage=1 00:08:47.369 --rc genhtml_function_coverage=1 00:08:47.369 --rc genhtml_legend=1 00:08:47.369 --rc geninfo_all_blocks=1 00:08:47.369 --rc geninfo_unexecuted_blocks=1 00:08:47.369 00:08:47.369 ' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.369 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.370 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.370 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.370 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.370 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.370 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.370 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.944 11:03:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:53.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:53.944 11:03:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:53.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:53.944 Found net devices under 0000:86:00.0: cvl_0_0 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.944 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:53.945 Found net devices under 0000:86:00.1: cvl_0_1 
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:53.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:53.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms
00:08:53.945
00:08:53.945 --- 10.0.0.2 ping statistics ---
00:08:53.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:53.945 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:53.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:53.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms
00:08:53.945
00:08:53.945 --- 10.0.0.1 ping statistics ---
00:08:53.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:53.945 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3943809
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3943809
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3943809 ']'
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:53.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:53.945 [2024-11-20 11:03:20.651633] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization...
00:08:53.945 [2024-11-20 11:03:20.651684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:53.945 [2024-11-20 11:03:20.731172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:53.945 [2024-11-20 11:03:20.771936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:53.945 [2024-11-20 11:03:20.771978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:53.945 [2024-11-20 11:03:20.771985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:53.945 [2024-11-20 11:03:20.771991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:53.945 [2024-11-20 11:03:20.771996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:53.945 [2024-11-20 11:03:20.773654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:53.945 [2024-11-20 11:03:20.773760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:53.945 [2024-11-20 11:03:20.773865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.945 [2024-11-20 11:03:20.773866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:53.945 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:08:53.945 [2024-11-20 11:03:21.095976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:53.945 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:53.945 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:08:53.945 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:54.204 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:08:54.204 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:54.464 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:08:54.464 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:54.722 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:08:54.722 11:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:08:54.722 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:54.981 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:08:54.981 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:55.239 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:08:55.239 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:55.497 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:08:55.497 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:08:55.756 11:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:08:55.756 11:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:08:55.756 11:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:56.013 11:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:08:56.013 11:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:08:56.272 11:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:56.530 [2024-11-20 11:03:23.806611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:56.530 11:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:08:56.788 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:08:56.789 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:08:58.166 11:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:08:58.166 11:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:08:58.166 11:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:08:58.166 11:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:08:58.166 11:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:08:58.166 11:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:09:00.068 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:09:00.068 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:09:00.068 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:09:00.068 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:09:00.068 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:09:00.068 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:09:00.068 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:00.068 [global]
00:09:00.068 thread=1
00:09:00.068 invalidate=1
00:09:00.068 rw=write
00:09:00.068 time_based=1
00:09:00.068 runtime=1
00:09:00.068 ioengine=libaio
00:09:00.068 direct=1
00:09:00.068 bs=4096
00:09:00.068 iodepth=1
00:09:00.068 norandommap=0
00:09:00.068 numjobs=1
00:09:00.068
00:09:00.068 verify_dump=1
00:09:00.068 verify_backlog=512
00:09:00.068 verify_state_save=0
00:09:00.068 do_verify=1
00:09:00.068 verify=crc32c-intel
00:09:00.068 [job0]
00:09:00.068 filename=/dev/nvme0n1
00:09:00.068 [job1]
00:09:00.068 filename=/dev/nvme0n2
00:09:00.068 [job2]
00:09:00.068 filename=/dev/nvme0n3
00:09:00.068 [job3]
00:09:00.068 filename=/dev/nvme0n4
00:09:00.068 Could not set queue depth (nvme0n1)
00:09:00.068 Could not set queue depth (nvme0n2)
00:09:00.068 Could not set queue depth (nvme0n3)
00:09:00.068 Could not set queue depth (nvme0n4)
00:09:00.327 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:00.327 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:00.327 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:00.327 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:00.327 fio-3.35
00:09:00.327 Starting 4 threads
00:09:01.704
00:09:01.704 job0: (groupid=0, jobs=1): err= 0: pid=3945165: Wed Nov 20 11:03:29 2024
00:09:01.704 read: IOPS=39, BW=157KiB/s (160kB/s)(160KiB/1021msec)
00:09:01.704 slat (nsec): min=7747, max=37634, avg=11604.67, stdev=5744.74
00:09:01.704 clat (usec): min=205, max=41963, avg=22685.59, stdev=20557.07
00:09:01.704 lat (usec): min=213, max=41974, avg=22697.20, stdev=20556.40
00:09:01.704 clat percentiles (usec):
00:09:01.704 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 237],
00:09:01.704 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[40633], 60.00th=[41157],
00:09:01.704 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681],
00:09:01.704 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:01.704 | 99.99th=[42206]
00:09:01.704 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets
00:09:01.704 slat (nsec): min=9807, max=46301, avg=11631.71, stdev=2673.70
00:09:01.704 clat (usec): min=136, max=580, avg=205.40, stdev=35.73
00:09:01.704 lat (usec): min=147, max=592, avg=217.03, stdev=35.91
00:09:01.704 clat percentiles (usec):
00:09:01.704 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 176],
00:09:01.704 | 30.00th=[ 182], 40.00th=[ 194], 50.00th=[ 206], 60.00th=[ 219],
00:09:01.704 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 253],
00:09:01.704 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 578], 99.95th=[ 578],
00:09:01.704 | 99.99th=[ 578]
00:09:01.704 bw ( KiB/s): min= 4096, max= 4096, per=17.06%, avg=4096.00, stdev= 0.00, samples=1
00:09:01.704 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:01.704 lat (usec) : 250=89.67%, 500=6.16%, 750=0.18%
00:09:01.704 lat (msec) : 50=3.99%
00:09:01.704 cpu : usr=0.39%, sys=0.88%, ctx=553, majf=0, minf=1
00:09:01.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:01.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.704 issued rwts: total=40,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:01.704 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:01.704 job1: (groupid=0, jobs=1): err= 0: pid=3945166: Wed Nov 20 11:03:29 2024
00:09:01.704 read: IOPS=2069, BW=8280KiB/s (8478kB/s)(8288KiB/1001msec)
00:09:01.704 slat (nsec): min=7183, max=39570, avg=8313.51, stdev=1263.41
00:09:01.704 clat (usec): min=186, max=522, avg=245.38, stdev=24.83
00:09:01.704 lat (usec): min=209, max=529, avg=253.69, stdev=24.85
00:09:01.704 clat percentiles (usec):
00:09:01.704 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231],
00:09:01.704 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245],
00:09:01.704 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 285],
00:09:01.704 | 99.00th=[ 326], 99.50th=[ 388], 99.90th=[ 478], 99.95th=[ 519],
00:09:01.704 | 99.99th=[ 523]
00:09:01.704 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:09:01.704 slat (nsec): min=9934, max=43733, avg=12021.92, stdev=1987.22
00:09:01.704 clat (usec): min=115, max=1459, avg=167.95, stdev=44.15
00:09:01.704 lat (usec): min=127, max=1470, avg=179.97, stdev=43.99
00:09:01.704 clat percentiles (usec):
00:09:01.704 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141],
00:09:01.704 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 163],
00:09:01.704 | 70.00th=[ 172], 80.00th=[ 184], 90.00th=[ 241], 95.00th=[ 243],
00:09:01.704 | 99.00th=[ 253], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 326],
00:09:01.704 | 99.99th=[ 1467]
00:09:01.704 bw ( KiB/s): min= 9600, max= 9600, per=39.97%, avg=9600.00, stdev= 0.00, samples=1
00:09:01.704 iops : min= 2400, max= 2400, avg=2400.00, stdev= 0.00, samples=1
00:09:01.704 lat (usec) : 250=86.87%, 500=13.06%, 750=0.04%
00:09:01.704 lat (msec) : 2=0.02%
00:09:01.704 cpu : usr=4.60%, sys=6.40%, ctx=4634, majf=0, minf=1
00:09:01.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:01.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.704 issued rwts: total=2072,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:01.704 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:01.704 job2: (groupid=0, jobs=1): err= 0: pid=3945171: Wed Nov 20 11:03:29 2024
00:09:01.704 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec)
00:09:01.704 slat (nsec): min=8521, max=23580, avg=9420.87, stdev=735.61
00:09:01.704 clat (usec): min=194, max=511, avg=252.47, stdev=47.85
00:09:01.704 lat (usec): min=203, max=521, avg=261.89, stdev=47.88
00:09:01.704 clat percentiles (usec):
00:09:01.704 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229],
00:09:01.704 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247],
00:09:01.704 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 306],
00:09:01.704 | 99.00th=[ 494], 99.50th=[ 498], 99.90th=[ 502], 99.95th=[ 502],
00:09:01.704 | 99.99th=[ 510]
00:09:01.704 write: IOPS=2543, BW=9.93MiB/s (10.4MB/s)(9.95MiB/1001msec); 0 zone resets
00:09:01.704 slat (nsec): min=12537, max=47039, avg=13851.71, stdev=1868.99
00:09:01.704 clat (usec): min=121, max=328, avg=162.38, stdev=21.63
00:09:01.704 lat (usec): min=134, max=342, avg=176.23, stdev=21.89
00:09:01.704 clat percentiles (usec):
00:09:01.704 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143],
00:09:01.704 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 165],
00:09:01.704 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 202],
00:09:01.704 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 273], 99.95th=[ 306],
00:09:01.704 | 99.99th=[ 330]
00:09:01.704 bw ( KiB/s): min=10288, max=10288, per=42.84%, avg=10288.00, stdev= 0.00, samples=1
00:09:01.704 iops : min= 2572, max= 2572, avg=2572.00, stdev= 0.00, samples=1
00:09:01.704 lat (usec) : 250=85.26%, 500=14.63%, 750=0.11%
00:09:01.704 cpu : usr=5.20%, sys=7.30%, ctx=4597, majf=0, minf=1
00:09:01.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:01.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.704 issued rwts: total=2048,2546,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:01.704 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:01.705 job3: (groupid=0, jobs=1): err= 0: pid=3945175: Wed Nov 20 11:03:29 2024
00:09:01.705 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec)
00:09:01.705 slat (nsec): min=9770, max=24602, avg=22357.09, stdev=2900.08
00:09:01.705 clat (usec): min=40632, max=41825, avg=40997.62, stdev=205.98
00:09:01.705 lat (usec): min=40642, max=41848, avg=41019.97, stdev=207.01
00:09:01.705 clat percentiles (usec):
00:09:01.705 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:09:01.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:09:01.705 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:09:01.705 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:09:01.705 | 99.99th=[41681]
00:09:01.705 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets
00:09:01.705 slat (nsec): min=9586, max=38183, avg=11381.75, stdev=2586.24
00:09:01.705 clat (usec): min=160, max=286, avg=191.38, stdev=14.93
00:09:01.705 lat (usec): min=170, max=324, avg=202.76, stdev=15.02
00:09:01.705 clat percentiles (usec):
00:09:01.705 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 180],
00:09:01.705 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194],
00:09:01.705 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 215],
00:09:01.705 | 99.00th=[ 239], 99.50th=[ 265], 99.90th=[ 285], 99.95th=[ 285],
00:09:01.705 | 99.99th=[ 285]
00:09:01.705 bw ( KiB/s): min= 4096, max= 4096, per=17.06%, avg=4096.00, stdev= 0.00, samples=1
00:09:01.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:01.705 lat (usec) : 250=95.13%, 500=0.75%
00:09:01.705 lat (msec) : 50=4.12%
00:09:01.705 cpu : usr=0.30%, sys=0.50%, ctx=535, majf=0, minf=1
00:09:01.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:01.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:01.705 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:01.705 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:01.705
00:09:01.705 Run status group 0 (all jobs):
00:09:01.705 READ: bw=16.0MiB/s (16.8MB/s), 87.3KiB/s-8280KiB/s (89.4kB/s-8478kB/s), io=16.3MiB (17.1MB), run=1001-1021msec
00:09:01.705 WRITE: bw=23.5MiB/s (24.6MB/s), 2006KiB/s-9.99MiB/s (2054kB/s-10.5MB/s), io=23.9MiB (25.1MB), run=1001-1021msec
00:09:01.705
00:09:01.705 Disk stats (read/write):
00:09:01.705 nvme0n1: ios=78/512, merge=0/0, ticks=723/96, in_queue=819, util=86.67%
00:09:01.705 nvme0n2: ios=1863/2048, merge=0/0, ticks=1426/322, in_queue=1748, util=98.17%
00:09:01.705 nvme0n3: ios=1870/2048, merge=0/0, ticks=1438/320, in_queue=1758, util=98.33%
00:09:01.705 nvme0n4: ios=18/512, merge=0/0, ticks=739/90, in_queue=829, util=89.59%
00:09:01.705 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:09:01.705 [global]
00:09:01.705 thread=1
00:09:01.705 invalidate=1
00:09:01.705 rw=randwrite
00:09:01.705 time_based=1
00:09:01.705 runtime=1
00:09:01.705 ioengine=libaio
00:09:01.705 direct=1
00:09:01.705 bs=4096
00:09:01.705 iodepth=1
00:09:01.705 norandommap=0
00:09:01.705 numjobs=1
00:09:01.705
00:09:01.705 verify_dump=1
00:09:01.705 verify_backlog=512
00:09:01.705 verify_state_save=0
00:09:01.705 do_verify=1
00:09:01.705 verify=crc32c-intel
00:09:01.705 [job0]
00:09:01.705 filename=/dev/nvme0n1
00:09:01.705 [job1]
00:09:01.705 filename=/dev/nvme0n2
00:09:01.705 [job2]
00:09:01.705 filename=/dev/nvme0n3
00:09:01.705 [job3]
00:09:01.705 filename=/dev/nvme0n4
00:09:01.705 Could not set queue depth (nvme0n1)
00:09:01.705 Could not set queue depth (nvme0n2)
00:09:01.705 Could not set queue depth (nvme0n3)
00:09:01.705 Could not set queue depth (nvme0n4)
00:09:01.963 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:01.963 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:01.963 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:01.963 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:01.963 fio-3.35
00:09:01.963 Starting 4 threads
00:09:03.339
00:09:03.339 job0: (groupid=0, jobs=1): err= 0: pid=3945580: Wed Nov 20 11:03:30 2024
00:09:03.339 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec)
00:09:03.339 slat (nsec): min=11730, max=18183, avg=13978.09, stdev=1623.15
00:09:03.339 clat (usec): min=40934, max=41039, avg=40984.60, stdev=28.48
00:09:03.339 lat (usec): min=40948, max=41052, avg=40998.57, stdev=28.39
00:09:03.339 clat percentiles (usec):
00:09:03.339 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:09:03.339 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:09:03.339 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:09:03.339 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:09:03.339 | 99.99th=[41157]
00:09:03.339 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets
00:09:03.339 slat (nsec): min=9801, max=64677, avg=13623.76, stdev=3884.55
00:09:03.339 clat (usec): min=137, max=287, avg=180.93, stdev=24.50
00:09:03.339 lat (usec): min=150, max=301, avg=194.55, stdev=24.85
00:09:03.339 clat percentiles (usec):
00:09:03.339 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161],
00:09:03.339 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182],
00:09:03.339 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 219], 95.00th=[ 237],
00:09:03.339 | 99.00th=[ 243], 99.50th=[ 260], 99.90th=[ 289], 99.95th=[ 289],
00:09:03.339 | 99.99th=[ 289]
00:09:03.339 bw ( KiB/s): min= 4096, max= 4096, per=18.32%, avg=4096.00, stdev= 0.00, samples=1
00:09:03.339 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:03.339 lat (usec) : 250=95.13%, 500=0.75%
00:09:03.339 lat (msec) : 50=4.12%
00:09:03.339 cpu : usr=0.80%, sys=0.70%, ctx=534, majf=0, minf=1
00:09:03.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:03.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.339 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:03.339 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:03.339 job1: (groupid=0, jobs=1): err= 0: pid=3945599: Wed Nov 20 11:03:30 2024
00:09:03.339 read: IOPS=2129, BW=8519KiB/s (8724kB/s)(8528KiB/1001msec)
00:09:03.339 slat (nsec): min=6153, max=27509, avg=7160.79, stdev=1096.21
00:09:03.339 clat (usec): min=180, max=525, avg=252.46, stdev=48.73
00:09:03.340 lat (usec): min=187, max=532, avg=259.62, stdev=48.90
00:09:03.340 clat percentiles (usec):
00:09:03.340 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 221],
00:09:03.340 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 251],
00:09:03.340 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 302], 95.00th=[ 338],
00:09:03.340 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 519], 99.95th=[ 523],
00:09:03.340 | 99.99th=[ 529]
00:09:03.340 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:09:03.340 slat (nsec): min=7119, max=58082, avg=9910.21, stdev=1833.69
00:09:03.340 clat (usec): min=117, max=493, avg=161.04, stdev=28.87
00:09:03.340 lat (usec): min=126, max=544, avg=170.95, stdev=29.36
00:09:03.340 clat percentiles (usec):
00:09:03.340 | 1.00th=[ 126], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143],
00:09:03.340 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159],
00:09:03.340 | 70.00th=[ 163], 80.00th=[ 174], 90.00th=[ 204], 95.00th=[ 223],
00:09:03.340 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 367], 99.95th=[ 486],
00:09:03.340 | 99.99th=[ 494]
00:09:03.340 bw ( KiB/s): min=11032, max=11032, per=49.34%, avg=11032.00, stdev= 0.00, samples=1
00:09:03.340 iops : min= 2758, max= 2758, avg=2758.00, stdev= 0.00, samples=1
00:09:03.340 lat (usec) : 250=80.99%, 500=18.73%, 750=0.28%
00:09:03.340 cpu : usr=1.80%, sys=4.60%, ctx=4692, majf=0, minf=1
00:09:03.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:03.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.340 issued rwts: total=2132,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:03.340 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:03.340 job2: (groupid=0, jobs=1): err= 0: pid=3945622: Wed Nov 20 11:03:30 2024
00:09:03.340 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec)
00:09:03.340 slat (nsec): min=6974, max=37308, avg=8489.96, stdev=1556.05
00:09:03.340 clat (usec): min=195, max=1013, avg=273.14, stdev=65.66
00:09:03.340 lat (usec): min=203, max=1022, avg=281.63, stdev=65.93
00:09:03.340 clat percentiles (usec):
00:09:03.340 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237],
00:09:03.340 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258],
00:09:03.340 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 396], 95.00th=[ 445],
00:09:03.340 | 99.00th=[ 486], 99.50th=[ 502], 99.90th=[ 537], 99.95th=[ 701],
00:09:03.340 | 99.99th=[ 1012]
00:09:03.340 write: IOPS=2120, BW=8484KiB/s (8687kB/s)(8492KiB/1001msec); 0 zone resets
00:09:03.340 slat (nsec): min=9960, max=97950, avg=11334.82, stdev=2531.45
00:09:03.340 clat (usec): min=122, max=309, avg=182.01, stdev=38.14
00:09:03.340 lat (usec): min=133, max=345, avg=193.35, stdev=38.35
00:09:03.340 clat percentiles (usec):
00:09:03.340 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 153],
00:09:03.340 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178],
00:09:03.340 | 70.00th=[ 188], 80.00th=[ 204], 90.00th=[ 247], 95.00th=[ 273],
00:09:03.340 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 306], 99.95th=[ 306],
00:09:03.340 | 99.99th=[ 310]
00:09:03.340 bw ( KiB/s): min= 8776, max= 8776, per=39.25%, avg=8776.00, stdev= 0.00, samples=1
00:09:03.340 iops : min= 2194, max= 2194, avg=2194.00, stdev= 0.00, samples=1
00:09:03.340 lat (usec) : 250=69.50%, 500=30.21%, 750=0.26%
00:09:03.340 lat (msec) : 2=0.02%
00:09:03.340 cpu : usr=3.40%, sys=6.90%, ctx=4172, majf=0, minf=1
00:09:03.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:03.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.340 issued rwts: total=2048,2123,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:03.340 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:03.340 job3: (groupid=0, jobs=1): err= 0: pid=3945628: Wed Nov 20 11:03:30 2024
00:09:03.340 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec)
00:09:03.340 slat (nsec): min=10404, max=26699, avg=21189.55, stdev=5246.59
00:09:03.340 clat (usec): min=40825, max=41095, avg=40966.41, stdev=45.30
00:09:03.340 lat (usec): min=40836, max=41107, avg=40987.60, stdev=45.38
00:09:03.340 clat percentiles (usec):
00:09:03.340 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:09:03.340 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:09:03.340 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:09:03.340 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:09:03.340 | 99.99th=[41157]
00:09:03.340 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets
00:09:03.340 slat (nsec): min=9643, max=39648, avg=12751.48, stdev=2641.03
00:09:03.340 clat (usec): min=135, max=370, avg=215.39, stdev=50.16
00:09:03.340 lat (usec): min=146, max=391, avg=228.14, stdev=50.60
00:09:03.340 clat percentiles (usec):
00:09:03.340 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172],
00:09:03.340 | 30.00th=[ 182], 40.00th=[ 196], 50.00th=[ 206], 60.00th=[ 217],
00:09:03.340 | 70.00th=[ 231], 80.00th=[ 255], 90.00th=[ 293], 95.00th=[ 322],
00:09:03.340 | 99.00th=[ 
351], 99.50th=[ 355], 99.90th=[ 371], 99.95th=[ 371], 00:09:03.340 | 99.99th=[ 371] 00:09:03.340 bw ( KiB/s): min= 4096, max= 4096, per=18.32%, avg=4096.00, stdev= 0.00, samples=1 00:09:03.340 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:03.340 lat (usec) : 250=74.91%, 500=20.97% 00:09:03.340 lat (msec) : 50=4.12% 00:09:03.340 cpu : usr=0.49%, sys=0.78%, ctx=539, majf=0, minf=1 00:09:03.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.340 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.340 00:09:03.340 Run status group 0 (all jobs): 00:09:03.340 READ: bw=16.2MiB/s (16.9MB/s), 86.2KiB/s-8519KiB/s (88.3kB/s-8724kB/s), io=16.5MiB (17.3MB), run=1001-1021msec 00:09:03.340 WRITE: bw=21.8MiB/s (22.9MB/s), 2006KiB/s-9.99MiB/s (2054kB/s-10.5MB/s), io=22.3MiB (23.4MB), run=1001-1021msec 00:09:03.340 00:09:03.340 Disk stats (read/write): 00:09:03.340 nvme0n1: ios=55/512, merge=0/0, ticks=830/86, in_queue=916, util=93.69% 00:09:03.340 nvme0n2: ios=1926/2048, merge=0/0, ticks=726/333, in_queue=1059, util=94.82% 00:09:03.340 nvme0n3: ios=1687/2048, merge=0/0, ticks=409/348, in_queue=757, util=88.95% 00:09:03.340 nvme0n4: ios=59/512, merge=0/0, ticks=1456/104, in_queue=1560, util=99.47% 00:09:03.340 11:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:03.340 [global] 00:09:03.340 thread=1 00:09:03.340 invalidate=1 00:09:03.340 rw=write 00:09:03.340 time_based=1 00:09:03.340 runtime=1 00:09:03.340 ioengine=libaio 00:09:03.340 direct=1 00:09:03.340 bs=4096 00:09:03.340 iodepth=128 00:09:03.340 norandommap=0 00:09:03.340 
numjobs=1 00:09:03.340 00:09:03.340 verify_dump=1 00:09:03.340 verify_backlog=512 00:09:03.340 verify_state_save=0 00:09:03.340 do_verify=1 00:09:03.340 verify=crc32c-intel 00:09:03.340 [job0] 00:09:03.340 filename=/dev/nvme0n1 00:09:03.340 [job1] 00:09:03.340 filename=/dev/nvme0n2 00:09:03.340 [job2] 00:09:03.340 filename=/dev/nvme0n3 00:09:03.340 [job3] 00:09:03.340 filename=/dev/nvme0n4 00:09:03.340 Could not set queue depth (nvme0n1) 00:09:03.340 Could not set queue depth (nvme0n2) 00:09:03.340 Could not set queue depth (nvme0n3) 00:09:03.340 Could not set queue depth (nvme0n4) 00:09:03.599 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.599 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.599 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.599 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.599 fio-3.35 00:09:03.599 Starting 4 threads 00:09:04.990 00:09:04.991 job0: (groupid=0, jobs=1): err= 0: pid=3946075: Wed Nov 20 11:03:32 2024 00:09:04.991 read: IOPS=4422, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1005msec) 00:09:04.991 slat (nsec): min=1030, max=30041k, avg=112754.35, stdev=999985.71 00:09:04.991 clat (usec): min=3254, max=67390, avg=14780.24, stdev=9584.17 00:09:04.991 lat (usec): min=3260, max=67413, avg=14892.99, stdev=9679.31 00:09:04.991 clat percentiles (usec): 00:09:04.991 | 1.00th=[ 4817], 5.00th=[ 5997], 10.00th=[ 7832], 20.00th=[10028], 00:09:04.991 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:09:04.991 | 70.00th=[11994], 80.00th=[19530], 90.00th=[30802], 95.00th=[34866], 00:09:04.991 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[52691], 00:09:04.991 | 99.99th=[67634] 00:09:04.991 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone 
resets 00:09:04.991 slat (nsec): min=1938, max=14266k, avg=97528.00, stdev=572748.67 00:09:04.991 clat (usec): min=1163, max=55941, avg=13381.25, stdev=8863.43 00:09:04.991 lat (usec): min=1173, max=55949, avg=13478.77, stdev=8916.41 00:09:04.991 clat percentiles (usec): 00:09:04.991 | 1.00th=[ 4228], 5.00th=[ 7504], 10.00th=[ 8848], 20.00th=[ 9634], 00:09:04.991 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10421], 60.00th=[10552], 00:09:04.991 | 70.00th=[10945], 80.00th=[16712], 90.00th=[18482], 95.00th=[35390], 00:09:04.991 | 99.00th=[51643], 99.50th=[52691], 99.90th=[55837], 99.95th=[55837], 00:09:04.991 | 99.99th=[55837] 00:09:04.991 bw ( KiB/s): min=16384, max=20480, per=28.91%, avg=18432.00, stdev=2896.31, samples=2 00:09:04.991 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:04.991 lat (msec) : 2=0.06%, 4=0.67%, 10=27.83%, 20=57.54%, 50=12.40% 00:09:04.991 lat (msec) : 100=1.50% 00:09:04.991 cpu : usr=2.09%, sys=5.88%, ctx=418, majf=0, minf=1 00:09:04.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:04.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.991 issued rwts: total=4445,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.991 job1: (groupid=0, jobs=1): err= 0: pid=3946088: Wed Nov 20 11:03:32 2024 00:09:04.991 read: IOPS=2402, BW=9610KiB/s (9840kB/s)(9648KiB/1004msec) 00:09:04.991 slat (nsec): min=1376, max=13557k, avg=138833.78, stdev=794656.70 00:09:04.991 clat (usec): min=2321, max=49698, avg=17542.88, stdev=6305.93 00:09:04.991 lat (usec): min=6814, max=49724, avg=17681.71, stdev=6369.90 00:09:04.991 clat percentiles (usec): 00:09:04.991 | 1.00th=[ 6980], 5.00th=[12649], 10.00th=[13960], 20.00th=[14353], 00:09:04.991 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:09:04.991 | 
70.00th=[16909], 80.00th=[19006], 90.00th=[29230], 95.00th=[34866], 00:09:04.991 | 99.00th=[38536], 99.50th=[38536], 99.90th=[41157], 99.95th=[42730], 00:09:04.991 | 99.99th=[49546] 00:09:04.991 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:09:04.991 slat (usec): min=2, max=24653, avg=253.13, stdev=1438.36 00:09:04.991 clat (msec): min=11, max=116, avg=32.79, stdev=22.15 00:09:04.991 lat (msec): min=11, max=116, avg=33.04, stdev=22.28 00:09:04.991 clat percentiles (msec): 00:09:04.991 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 17], 00:09:04.991 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 23], 60.00th=[ 33], 00:09:04.991 | 70.00th=[ 40], 80.00th=[ 49], 90.00th=[ 54], 95.00th=[ 80], 00:09:04.991 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 117], 99.95th=[ 117], 00:09:04.991 | 99.99th=[ 117] 00:09:04.991 bw ( KiB/s): min= 8424, max=12056, per=16.06%, avg=10240.00, stdev=2568.21, samples=2 00:09:04.991 iops : min= 2106, max= 3014, avg=2560.00, stdev=642.05, samples=2 00:09:04.991 lat (msec) : 4=0.02%, 10=0.84%, 20=60.00%, 50=31.19%, 100=6.05% 00:09:04.991 lat (msec) : 250=1.89% 00:09:04.991 cpu : usr=3.29%, sys=2.79%, ctx=304, majf=0, minf=1 00:09:04.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:04.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.991 issued rwts: total=2412,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.991 job2: (groupid=0, jobs=1): err= 0: pid=3946108: Wed Nov 20 11:03:32 2024 00:09:04.991 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:09:04.991 slat (nsec): min=1413, max=12719k, avg=111105.35, stdev=784873.17 00:09:04.991 clat (usec): min=4712, max=36003, avg=13363.09, stdev=4107.46 00:09:04.991 lat (usec): min=4724, max=36013, avg=13474.20, stdev=4172.75 
00:09:04.991 clat percentiles (usec): 00:09:04.991 | 1.00th=[ 6718], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10290], 00:09:04.991 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12518], 60.00th=[13435], 00:09:04.991 | 70.00th=[14353], 80.00th=[15008], 90.00th=[17695], 95.00th=[22414], 00:09:04.991 | 99.00th=[28181], 99.50th=[32375], 99.90th=[35914], 99.95th=[35914], 00:09:04.991 | 99.99th=[35914] 00:09:04.991 write: IOPS=4429, BW=17.3MiB/s (18.1MB/s)(17.5MiB/1011msec); 0 zone resets 00:09:04.991 slat (usec): min=2, max=12056, avg=115.22, stdev=599.40 00:09:04.991 clat (usec): min=3055, max=35969, avg=16366.43, stdev=5806.23 00:09:04.991 lat (usec): min=3066, max=35974, avg=16481.65, stdev=5857.89 00:09:04.991 clat percentiles (usec): 00:09:04.991 | 1.00th=[ 3916], 5.00th=[ 8225], 10.00th=[10028], 20.00th=[11207], 00:09:04.991 | 30.00th=[11863], 40.00th=[13435], 50.00th=[16909], 60.00th=[18220], 00:09:04.991 | 70.00th=[19792], 80.00th=[20579], 90.00th=[23462], 95.00th=[27132], 00:09:04.991 | 99.00th=[30278], 99.50th=[32375], 99.90th=[34341], 99.95th=[34341], 00:09:04.991 | 99.99th=[35914] 00:09:04.991 bw ( KiB/s): min=16384, max=18416, per=27.29%, avg=17400.00, stdev=1436.84, samples=2 00:09:04.991 iops : min= 4096, max= 4604, avg=4350.00, stdev=359.21, samples=2 00:09:04.991 lat (msec) : 4=0.61%, 10=11.57%, 20=69.64%, 50=18.18% 00:09:04.991 cpu : usr=3.37%, sys=6.04%, ctx=439, majf=0, minf=1 00:09:04.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:04.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.991 issued rwts: total=4096,4478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.991 job3: (groupid=0, jobs=1): err= 0: pid=3946114: Wed Nov 20 11:03:32 2024 00:09:04.991 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:09:04.991 slat 
(nsec): min=1101, max=16008k, avg=109026.90, stdev=779741.12 00:09:04.991 clat (usec): min=3983, max=44675, avg=13312.81, stdev=5009.94 00:09:04.991 lat (usec): min=3992, max=44684, avg=13421.84, stdev=5063.49 00:09:04.991 clat percentiles (usec): 00:09:04.991 | 1.00th=[ 6521], 5.00th=[ 8356], 10.00th=[ 9634], 20.00th=[10552], 00:09:04.991 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:09:04.991 | 70.00th=[13304], 80.00th=[14484], 90.00th=[19268], 95.00th=[24249], 00:09:04.991 | 99.00th=[34866], 99.50th=[39584], 99.90th=[44827], 99.95th=[44827], 00:09:04.991 | 99.99th=[44827] 00:09:04.991 write: IOPS=4432, BW=17.3MiB/s (18.2MB/s)(17.5MiB/1012msec); 0 zone resets 00:09:04.991 slat (nsec): min=1875, max=12103k, avg=115092.35, stdev=574358.31 00:09:04.991 clat (usec): min=1146, max=44686, avg=16478.93, stdev=8669.88 00:09:04.991 lat (usec): min=1156, max=45795, avg=16594.02, stdev=8726.82 00:09:04.991 clat percentiles (usec): 00:09:04.991 | 1.00th=[ 3425], 5.00th=[ 6783], 10.00th=[ 8094], 20.00th=[10159], 00:09:04.991 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13173], 60.00th=[16319], 00:09:04.991 | 70.00th=[18220], 80.00th=[22676], 90.00th=[30016], 95.00th=[35914], 00:09:04.991 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:04.991 | 99.99th=[44827] 00:09:04.991 bw ( KiB/s): min=14392, max=20480, per=27.35%, avg=17436.00, stdev=4304.87, samples=2 00:09:04.991 iops : min= 3598, max= 5120, avg=4359.00, stdev=1076.22, samples=2 00:09:04.991 lat (msec) : 2=0.24%, 4=0.82%, 10=15.22%, 20=67.32%, 50=16.41% 00:09:04.991 cpu : usr=3.26%, sys=5.14%, ctx=505, majf=0, minf=1 00:09:04.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:04.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.991 issued rwts: total=4096,4486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.991 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:09:04.991 00:09:04.991 Run status group 0 (all jobs): 00:09:04.991 READ: bw=58.1MiB/s (60.9MB/s), 9610KiB/s-17.3MiB/s (9840kB/s-18.1MB/s), io=58.8MiB (61.6MB), run=1004-1012msec 00:09:04.991 WRITE: bw=62.3MiB/s (65.3MB/s), 9.96MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=63.0MiB (66.1MB), run=1004-1012msec 00:09:04.991 00:09:04.991 Disk stats (read/write): 00:09:04.991 nvme0n1: ios=3861/4096, merge=0/0, ticks=31944/26494, in_queue=58438, util=89.78% 00:09:04.991 nvme0n2: ios=2038/2048, merge=0/0, ticks=11734/23241, in_queue=34975, util=91.27% 00:09:04.991 nvme0n3: ios=3624/3647, merge=0/0, ticks=46912/57129, in_queue=104041, util=97.40% 00:09:04.991 nvme0n4: ios=3641/3695, merge=0/0, ticks=41686/51530, in_queue=93216, util=95.17% 00:09:04.991 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:04.991 [global] 00:09:04.991 thread=1 00:09:04.991 invalidate=1 00:09:04.991 rw=randwrite 00:09:04.991 time_based=1 00:09:04.991 runtime=1 00:09:04.991 ioengine=libaio 00:09:04.991 direct=1 00:09:04.991 bs=4096 00:09:04.991 iodepth=128 00:09:04.991 norandommap=0 00:09:04.991 numjobs=1 00:09:04.991 00:09:04.991 verify_dump=1 00:09:04.991 verify_backlog=512 00:09:04.991 verify_state_save=0 00:09:04.991 do_verify=1 00:09:04.991 verify=crc32c-intel 00:09:04.991 [job0] 00:09:04.991 filename=/dev/nvme0n1 00:09:04.991 [job1] 00:09:04.991 filename=/dev/nvme0n2 00:09:04.991 [job2] 00:09:04.991 filename=/dev/nvme0n3 00:09:04.991 [job3] 00:09:04.991 filename=/dev/nvme0n4 00:09:04.992 Could not set queue depth (nvme0n1) 00:09:04.992 Could not set queue depth (nvme0n2) 00:09:04.992 Could not set queue depth (nvme0n3) 00:09:04.992 Could not set queue depth (nvme0n4) 00:09:05.251 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.251 
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.251 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.251 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.251 fio-3.35 00:09:05.251 Starting 4 threads 00:09:06.624 00:09:06.624 job0: (groupid=0, jobs=1): err= 0: pid=3946501: Wed Nov 20 11:03:33 2024 00:09:06.624 read: IOPS=6306, BW=24.6MiB/s (25.8MB/s)(24.7MiB/1002msec) 00:09:06.624 slat (nsec): min=1067, max=9894.5k, avg=73981.93, stdev=561249.48 00:09:06.624 clat (usec): min=743, max=23027, avg=10009.78, stdev=2796.32 00:09:06.624 lat (usec): min=899, max=23044, avg=10083.76, stdev=2835.91 00:09:06.624 clat percentiles (usec): 00:09:06.624 | 1.00th=[ 2040], 5.00th=[ 4359], 10.00th=[ 7504], 20.00th=[ 8356], 00:09:06.624 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552], 00:09:06.624 | 70.00th=[10814], 80.00th=[11338], 90.00th=[13698], 95.00th=[14353], 00:09:06.624 | 99.00th=[17695], 99.50th=[19268], 99.90th=[22938], 99.95th=[22938], 00:09:06.624 | 99.99th=[22938] 00:09:06.624 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:09:06.624 slat (nsec): min=1758, max=10829k, avg=68284.39, stdev=550762.75 00:09:06.624 clat (usec): min=949, max=22573, avg=9604.28, stdev=2821.77 00:09:06.624 lat (usec): min=961, max=22576, avg=9672.56, stdev=2876.19 00:09:06.624 clat percentiles (usec): 00:09:06.624 | 1.00th=[ 2507], 5.00th=[ 4113], 10.00th=[ 5932], 20.00th=[ 8160], 00:09:06.624 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10159], 00:09:06.624 | 70.00th=[10552], 80.00th=[11469], 90.00th=[12256], 95.00th=[14222], 00:09:06.624 | 99.00th=[17433], 99.50th=[19006], 99.90th=[20579], 99.95th=[21365], 00:09:06.624 | 99.99th=[22676] 00:09:06.624 bw ( KiB/s): min=24576, max=28614, per=36.96%, avg=26595.00, 
stdev=2855.30, samples=2 00:09:06.624 iops : min= 6144, max= 7153, avg=6648.50, stdev=713.47, samples=2 00:09:06.624 lat (usec) : 750=0.01%, 1000=0.15% 00:09:06.624 lat (msec) : 2=0.78%, 4=3.74%, 10=46.75%, 20=48.32%, 50=0.26% 00:09:06.624 cpu : usr=4.30%, sys=5.99%, ctx=432, majf=0, minf=1 00:09:06.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:06.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.624 issued rwts: total=6319,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.624 job1: (groupid=0, jobs=1): err= 0: pid=3946502: Wed Nov 20 11:03:33 2024 00:09:06.624 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:09:06.624 slat (nsec): min=1111, max=21245k, avg=118015.28, stdev=825284.48 00:09:06.624 clat (usec): min=1180, max=51035, avg=16203.73, stdev=7589.01 00:09:06.624 lat (usec): min=1188, max=59013, avg=16321.75, stdev=7639.10 00:09:06.624 clat percentiles (usec): 00:09:06.624 | 1.00th=[ 3949], 5.00th=[ 8586], 10.00th=[10552], 20.00th=[11600], 00:09:06.624 | 30.00th=[11994], 40.00th=[12780], 50.00th=[14222], 60.00th=[16057], 00:09:06.624 | 70.00th=[17957], 80.00th=[20055], 90.00th=[23725], 95.00th=[30278], 00:09:06.624 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:09:06.624 | 99.99th=[51119] 00:09:06.624 write: IOPS=3812, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1006msec); 0 zone resets 00:09:06.624 slat (nsec): min=1866, max=22690k, avg=146017.13, stdev=927069.13 00:09:06.624 clat (usec): min=539, max=74362, avg=17971.92, stdev=12991.09 00:09:06.624 lat (usec): min=2768, max=74372, avg=18117.94, stdev=13086.21 00:09:06.624 clat percentiles (usec): 00:09:06.624 | 1.00th=[ 3032], 5.00th=[ 8455], 10.00th=[ 9896], 20.00th=[10552], 00:09:06.624 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12649], 
60.00th=[15008], 00:09:06.624 | 70.00th=[16319], 80.00th=[21890], 90.00th=[35390], 95.00th=[50594], 00:09:06.624 | 99.00th=[68682], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:09:06.624 | 99.99th=[73925] 00:09:06.624 bw ( KiB/s): min=13272, max=16351, per=20.58%, avg=14811.50, stdev=2177.18, samples=2 00:09:06.624 iops : min= 3318, max= 4087, avg=3702.50, stdev=543.77, samples=2 00:09:06.624 lat (usec) : 750=0.01% 00:09:06.624 lat (msec) : 2=0.31%, 4=0.93%, 10=7.71%, 20=69.38%, 50=18.08% 00:09:06.624 lat (msec) : 100=3.59% 00:09:06.624 cpu : usr=2.49%, sys=3.88%, ctx=316, majf=0, minf=1 00:09:06.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:06.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.624 issued rwts: total=3584,3835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.624 job2: (groupid=0, jobs=1): err= 0: pid=3946509: Wed Nov 20 11:03:33 2024 00:09:06.624 read: IOPS=3110, BW=12.2MiB/s (12.7MB/s)(12.3MiB/1010msec) 00:09:06.624 slat (nsec): min=1458, max=19807k, avg=137605.56, stdev=1011797.47 00:09:06.624 clat (usec): min=4712, max=60947, avg=15687.85, stdev=6842.76 00:09:06.624 lat (usec): min=4724, max=60954, avg=15825.45, stdev=6925.28 00:09:06.624 clat percentiles (usec): 00:09:06.624 | 1.00th=[ 5735], 5.00th=[ 8717], 10.00th=[10552], 20.00th=[11469], 00:09:06.624 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13829], 60.00th=[14484], 00:09:06.624 | 70.00th=[16450], 80.00th=[19530], 90.00th=[21627], 95.00th=[27919], 00:09:06.624 | 99.00th=[48497], 99.50th=[54264], 99.90th=[61080], 99.95th=[61080], 00:09:06.624 | 99.99th=[61080] 00:09:06.624 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:09:06.624 slat (usec): min=2, max=12723, avg=147.73, stdev=774.81 00:09:06.624 clat (usec): min=834, 
max=116636, avg=21997.15, stdev=19830.19 00:09:06.624 lat (usec): min=845, max=116649, avg=22144.88, stdev=19952.91 00:09:06.624 clat percentiles (msec): 00:09:06.624 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:09:06.624 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 14], 00:09:06.624 | 70.00th=[ 24], 80.00th=[ 32], 90.00th=[ 48], 95.00th=[ 57], 00:09:06.624 | 99.00th=[ 107], 99.50th=[ 113], 99.90th=[ 117], 99.95th=[ 117], 00:09:06.624 | 99.99th=[ 117] 00:09:06.624 bw ( KiB/s): min=10075, max=18120, per=19.59%, avg=14097.50, stdev=5688.67, samples=2 00:09:06.624 iops : min= 2518, max= 4530, avg=3524.00, stdev=1422.70, samples=2 00:09:06.624 lat (usec) : 1000=0.04% 00:09:06.624 lat (msec) : 2=0.03%, 4=0.71%, 10=9.08%, 20=63.26%, 50=22.02% 00:09:06.624 lat (msec) : 100=3.91%, 250=0.94% 00:09:06.624 cpu : usr=2.68%, sys=4.36%, ctx=436, majf=0, minf=1 00:09:06.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:06.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.624 issued rwts: total=3142,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.624 job3: (groupid=0, jobs=1): err= 0: pid=3946510: Wed Nov 20 11:03:33 2024 00:09:06.624 read: IOPS=4036, BW=15.8MiB/s (16.5MB/s)(15.9MiB/1006msec) 00:09:06.624 slat (nsec): min=1567, max=19137k, avg=123764.76, stdev=866848.55 00:09:06.624 clat (usec): min=2024, max=55839, avg=15771.06, stdev=7186.84 00:09:06.624 lat (usec): min=5957, max=55862, avg=15894.82, stdev=7257.96 00:09:06.624 clat percentiles (usec): 00:09:06.624 | 1.00th=[ 6652], 5.00th=[ 9503], 10.00th=[10814], 20.00th=[11338], 00:09:06.624 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13304], 60.00th=[13829], 00:09:06.624 | 70.00th=[15664], 80.00th=[17433], 90.00th=[27657], 95.00th=[33817], 00:09:06.624 | 99.00th=[39584], 
99.50th=[39584], 99.90th=[43254], 99.95th=[44827], 00:09:06.624 | 99.99th=[55837] 00:09:06.624 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:09:06.624 slat (usec): min=2, max=24824, avg=115.41, stdev=716.74 00:09:06.624 clat (usec): min=7835, max=45692, avg=14859.40, stdev=5459.19 00:09:06.624 lat (usec): min=7845, max=45728, avg=14974.81, stdev=5517.28 00:09:06.624 clat percentiles (usec): 00:09:06.624 | 1.00th=[ 8979], 5.00th=[10683], 10.00th=[11207], 20.00th=[11863], 00:09:06.624 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13042], 60.00th=[13566], 00:09:06.624 | 70.00th=[13960], 80.00th=[17171], 90.00th=[21365], 95.00th=[28705], 00:09:06.624 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:06.624 | 99.99th=[45876] 00:09:06.624 bw ( KiB/s): min=16351, max=16384, per=22.74%, avg=16367.50, stdev=23.33, samples=2 00:09:06.624 iops : min= 4087, max= 4096, avg=4091.50, stdev= 6.36, samples=2 00:09:06.624 lat (msec) : 4=0.01%, 10=4.71%, 20=81.43%, 50=13.84%, 100=0.01% 00:09:06.624 cpu : usr=3.38%, sys=5.47%, ctx=364, majf=0, minf=1 00:09:06.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:06.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.624 issued rwts: total=4061,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.624 00:09:06.624 Run status group 0 (all jobs): 00:09:06.624 READ: bw=66.2MiB/s (69.4MB/s), 12.2MiB/s-24.6MiB/s (12.7MB/s-25.8MB/s), io=66.8MiB (70.1MB), run=1002-1010msec 00:09:06.624 WRITE: bw=70.3MiB/s (73.7MB/s), 13.9MiB/s-25.9MiB/s (14.5MB/s-27.2MB/s), io=71.0MiB (74.4MB), run=1002-1010msec 00:09:06.624 00:09:06.624 Disk stats (read/write): 00:09:06.624 nvme0n1: ios=5446/5632, merge=0/0, ticks=46141/44371, in_queue=90512, util=88.28% 00:09:06.624 nvme0n2: ios=3122/3297, 
merge=0/0, ticks=31983/37525, in_queue=69508, util=97.46% 00:09:06.624 nvme0n3: ios=2685/3072, merge=0/0, ticks=41357/63615, in_queue=104972, util=89.07% 00:09:06.625 nvme0n4: ios=3116/3441, merge=0/0, ticks=25970/23201, in_queue=49171, util=99.79% 00:09:06.625 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:06.625 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3946682 00:09:06.625 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:06.625 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:06.625 [global] 00:09:06.625 thread=1 00:09:06.625 invalidate=1 00:09:06.625 rw=read 00:09:06.625 time_based=1 00:09:06.625 runtime=10 00:09:06.625 ioengine=libaio 00:09:06.625 direct=1 00:09:06.625 bs=4096 00:09:06.625 iodepth=1 00:09:06.625 norandommap=1 00:09:06.625 numjobs=1 00:09:06.625 00:09:06.625 [job0] 00:09:06.625 filename=/dev/nvme0n1 00:09:06.625 [job1] 00:09:06.625 filename=/dev/nvme0n2 00:09:06.625 [job2] 00:09:06.625 filename=/dev/nvme0n3 00:09:06.625 [job3] 00:09:06.625 filename=/dev/nvme0n4 00:09:06.625 Could not set queue depth (nvme0n1) 00:09:06.625 Could not set queue depth (nvme0n2) 00:09:06.625 Could not set queue depth (nvme0n3) 00:09:06.625 Could not set queue depth (nvme0n4) 00:09:06.625 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.625 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.625 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.625 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.625 fio-3.35 00:09:06.625 Starting 4 threads 00:09:09.901 11:03:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:09.901 11:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:09.901 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33886208, buflen=4096 00:09:09.901 fio: pid=3946884, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:09.901 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=29007872, buflen=4096 00:09:09.901 fio: pid=3946882, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:09.901 11:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:09.901 11:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:09.901 11:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:09.901 11:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:09.901 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=315392, buflen=4096 00:09:09.901 fio: pid=3946880, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.159 11:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.159 11:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
00:09:10.159 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4022272, buflen=4096 00:09:10.159 fio: pid=3946881, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.159 00:09:10.159 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3946880: Wed Nov 20 11:03:37 2024 00:09:10.159 read: IOPS=24, BW=97.7KiB/s (100kB/s)(308KiB/3152msec) 00:09:10.159 slat (usec): min=8, max=10879, avg=162.41, stdev=1229.22 00:09:10.159 clat (usec): min=351, max=41957, avg=40489.75, stdev=4638.94 00:09:10.159 lat (usec): min=401, max=52042, avg=40653.96, stdev=4818.13 00:09:10.159 clat percentiles (usec): 00:09:10.159 | 1.00th=[ 351], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:10.159 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:10.159 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.159 | 99.99th=[42206] 00:09:10.159 bw ( KiB/s): min= 93, max= 104, per=0.50%, avg=98.17, stdev= 4.67, samples=6 00:09:10.159 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:09:10.159 lat (usec) : 500=1.28% 00:09:10.159 lat (msec) : 50=97.44% 00:09:10.159 cpu : usr=0.13%, sys=0.00%, ctx=80, majf=0, minf=1 00:09:10.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.159 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.159 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.159 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3946881: Wed Nov 20 11:03:37 2024 00:09:10.159 read: IOPS=292, BW=1170KiB/s (1198kB/s)(3928KiB/3357msec) 00:09:10.159 
slat (usec): min=6, max=13791, avg=36.50, stdev=563.38 00:09:10.159 clat (usec): min=192, max=42008, avg=3370.68, stdev=10843.10 00:09:10.159 lat (usec): min=200, max=55087, avg=3407.20, stdev=10957.75 00:09:10.159 clat percentiles (usec): 00:09:10.159 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:09:10.159 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:09:10.159 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 326], 95.00th=[41157], 00:09:10.159 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:10.159 | 99.99th=[42206] 00:09:10.159 bw ( KiB/s): min= 93, max= 4416, per=6.64%, avg=1298.17, stdev=1912.37, samples=6 00:09:10.159 iops : min= 23, max= 1104, avg=324.50, stdev=478.12, samples=6 00:09:10.159 lat (usec) : 250=48.63%, 500=43.64% 00:09:10.159 lat (msec) : 50=7.63% 00:09:10.159 cpu : usr=0.15%, sys=0.48%, ctx=986, majf=0, minf=2 00:09:10.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.159 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.159 issued rwts: total=983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.159 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3946882: Wed Nov 20 11:03:37 2024 00:09:10.159 read: IOPS=2424, BW=9695KiB/s (9927kB/s)(27.7MiB/2922msec) 00:09:10.159 slat (usec): min=6, max=15562, avg=11.98, stdev=227.90 00:09:10.159 clat (usec): min=166, max=41292, avg=395.60, stdev=2545.61 00:09:10.159 lat (usec): min=174, max=41301, avg=407.58, stdev=2556.69 00:09:10.159 clat percentiles (usec): 00:09:10.159 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 204], 00:09:10.159 | 30.00th=[ 217], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:09:10.159 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 
262], 95.00th=[ 269], 00:09:10.159 | 99.00th=[ 306], 99.50th=[ 445], 99.90th=[41157], 99.95th=[41157], 00:09:10.159 | 99.99th=[41157] 00:09:10.159 bw ( KiB/s): min= 96, max=16576, per=44.44%, avg=8691.20, stdev=8096.88, samples=5 00:09:10.159 iops : min= 24, max= 4144, avg=2172.80, stdev=2024.22, samples=5 00:09:10.159 lat (usec) : 250=70.82%, 500=28.72%, 750=0.03% 00:09:10.159 lat (msec) : 10=0.01%, 20=0.01%, 50=0.40% 00:09:10.159 cpu : usr=1.61%, sys=3.63%, ctx=7085, majf=0, minf=2 00:09:10.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.159 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.159 issued rwts: total=7083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.159 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3946884: Wed Nov 20 11:03:37 2024 00:09:10.159 read: IOPS=3021, BW=11.8MiB/s (12.4MB/s)(32.3MiB/2738msec) 00:09:10.159 slat (nsec): min=6751, max=42624, avg=8107.19, stdev=1468.64 00:09:10.159 clat (usec): min=160, max=41352, avg=318.40, stdev=1908.65 00:09:10.159 lat (usec): min=176, max=41362, avg=326.51, stdev=1909.02 00:09:10.159 clat percentiles (usec): 00:09:10.159 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:09:10.159 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:09:10.159 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 265], 00:09:10.159 | 99.00th=[ 289], 99.50th=[ 412], 99.90th=[41157], 99.95th=[41157], 00:09:10.159 | 99.99th=[41157] 00:09:10.159 bw ( KiB/s): min= 7760, max=17888, per=67.63%, avg=13227.20, stdev=3806.31, samples=5 00:09:10.159 iops : min= 1940, max= 4472, avg=3306.80, stdev=951.58, samples=5 00:09:10.159 lat (usec) : 250=82.95%, 500=16.79%, 750=0.02% 00:09:10.159 lat (msec) : 20=0.01%, 
50=0.22% 00:09:10.159 cpu : usr=1.53%, sys=4.90%, ctx=8275, majf=0, minf=2 00:09:10.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.159 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.159 issued rwts: total=8274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.159 00:09:10.159 Run status group 0 (all jobs): 00:09:10.159 READ: bw=19.1MiB/s (20.0MB/s), 97.7KiB/s-11.8MiB/s (100kB/s-12.4MB/s), io=64.1MiB (67.2MB), run=2738-3357msec 00:09:10.159 00:09:10.159 Disk stats (read/write): 00:09:10.159 nvme0n1: ios=76/0, merge=0/0, ticks=3079/0, in_queue=3079, util=95.38% 00:09:10.159 nvme0n2: ios=1010/0, merge=0/0, ticks=3366/0, in_queue=3366, util=96.18% 00:09:10.159 nvme0n3: ios=6864/0, merge=0/0, ticks=2673/0, in_queue=2673, util=95.64% 00:09:10.159 nvme0n4: ios=8270/0, merge=0/0, ticks=2420/0, in_queue=2420, util=96.48% 00:09:10.416 11:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.416 11:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:10.673 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.673 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:10.930 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.930 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:10.930 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.930 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:11.187 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:11.187 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3946682 00:09:11.187 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:11.187 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.444 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.444 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:11.444 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:11.444 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.444 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:11.444 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.444 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:11.444 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:11.444 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:11.444 nvmf hotplug test: fio failed as expected 00:09:11.444 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.702 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.702 rmmod nvme_tcp 00:09:11.702 rmmod nvme_fabrics 00:09:11.702 rmmod nvme_keyring 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:11.702 11:03:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3943809 ']' 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3943809 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3943809 ']' 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3943809 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3943809 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3943809' 00:09:11.702 killing process with pid 3943809 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3943809 00:09:11.702 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3943809 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.961 11:03:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.873 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:13.873 00:09:13.873 real 0m26.915s 00:09:13.874 user 1m47.404s 00:09:13.874 sys 0m8.559s 00:09:13.874 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.874 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.874 ************************************ 00:09:13.874 END TEST nvmf_fio_target 00:09:13.874 ************************************ 00:09:14.132 11:03:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:14.132 11:03:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.132 11:03:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.132 11:03:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.132 ************************************ 
00:09:14.133 START TEST nvmf_bdevio 00:09:14.133 ************************************ 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:14.133 * Looking for test storage... 00:09:14.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.133 11:03:41 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:14.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.133 --rc genhtml_branch_coverage=1 00:09:14.133 --rc genhtml_function_coverage=1 00:09:14.133 --rc genhtml_legend=1 00:09:14.133 --rc geninfo_all_blocks=1 00:09:14.133 --rc geninfo_unexecuted_blocks=1 00:09:14.133 00:09:14.133 ' 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:14.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.133 --rc genhtml_branch_coverage=1 00:09:14.133 --rc genhtml_function_coverage=1 00:09:14.133 --rc genhtml_legend=1 00:09:14.133 --rc geninfo_all_blocks=1 00:09:14.133 --rc geninfo_unexecuted_blocks=1 00:09:14.133 00:09:14.133 ' 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:14.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.133 --rc genhtml_branch_coverage=1 00:09:14.133 --rc genhtml_function_coverage=1 00:09:14.133 --rc genhtml_legend=1 00:09:14.133 --rc geninfo_all_blocks=1 00:09:14.133 --rc geninfo_unexecuted_blocks=1 00:09:14.133 00:09:14.133 ' 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:14.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.133 --rc genhtml_branch_coverage=1 00:09:14.133 --rc genhtml_function_coverage=1 00:09:14.133 --rc genhtml_legend=1 00:09:14.133 --rc geninfo_all_blocks=1 00:09:14.133 --rc geninfo_unexecuted_blocks=1 00:09:14.133 00:09:14.133 ' 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.133 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.133 11:03:41 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.393 11:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.967 11:03:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:20.967 11:03:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:20.967 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:20.967 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:20.967 
11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:20.967 Found net devices under 0000:86:00.0: cvl_0_0 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:20.967 Found net devices under 0000:86:00.1: cvl_0_1 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.967 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:20.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:20.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:09:20.968 00:09:20.968 --- 10.0.0.2 ping statistics --- 00:09:20.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.968 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:09:20.968 00:09:20.968 --- 10.0.0.1 ping statistics --- 00:09:20.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.968 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:20.968 11:03:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3951213 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3951213 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3951213 ']' 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.968 [2024-11-20 11:03:47.733778] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:09:20.968 [2024-11-20 11:03:47.733832] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.968 [2024-11-20 11:03:47.813672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.968 [2024-11-20 11:03:47.856413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.968 [2024-11-20 11:03:47.856453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.968 [2024-11-20 11:03:47.856460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.968 [2024-11-20 11:03:47.856466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.968 [2024-11-20 11:03:47.856472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:20.968 [2024-11-20 11:03:47.858049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:20.968 [2024-11-20 11:03:47.858081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:20.968 [2024-11-20 11:03:47.858188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.968 [2024-11-20 11:03:47.858189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.968 [2024-11-20 11:03:47.995049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.968 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.968 11:03:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.968 Malloc0 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.968 [2024-11-20 11:03:48.056293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:20.968 { 00:09:20.968 "params": { 00:09:20.968 "name": "Nvme$subsystem", 00:09:20.968 "trtype": "$TEST_TRANSPORT", 00:09:20.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.968 "adrfam": "ipv4", 00:09:20.968 "trsvcid": "$NVMF_PORT", 00:09:20.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.968 "hdgst": ${hdgst:-false}, 00:09:20.968 "ddgst": ${ddgst:-false} 00:09:20.968 }, 00:09:20.968 "method": "bdev_nvme_attach_controller" 00:09:20.968 } 00:09:20.968 EOF 00:09:20.968 )") 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:20.968 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:20.968 "params": { 00:09:20.968 "name": "Nvme1", 00:09:20.968 "trtype": "tcp", 00:09:20.968 "traddr": "10.0.0.2", 00:09:20.968 "adrfam": "ipv4", 00:09:20.968 "trsvcid": "4420", 00:09:20.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:20.968 "hdgst": false, 00:09:20.968 "ddgst": false 00:09:20.968 }, 00:09:20.968 "method": "bdev_nvme_attach_controller" 00:09:20.968 }' 00:09:20.968 [2024-11-20 11:03:48.107573] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:09:20.968 [2024-11-20 11:03:48.107617] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951379 ] 00:09:20.969 [2024-11-20 11:03:48.186046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.969 [2024-11-20 11:03:48.230268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.969 [2024-11-20 11:03:48.230372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.969 [2024-11-20 11:03:48.230373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.969 I/O targets: 00:09:20.969 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:20.969 00:09:20.969 00:09:20.969 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.969 http://cunit.sourceforge.net/ 00:09:20.969 00:09:20.969 00:09:20.969 Suite: bdevio tests on: Nvme1n1 00:09:21.226 Test: blockdev write read block ...passed 00:09:21.226 Test: blockdev write zeroes read block ...passed 00:09:21.226 Test: blockdev write zeroes read no split ...passed 00:09:21.226 Test: blockdev write zeroes read split 
...passed 00:09:21.226 Test: blockdev write zeroes read split partial ...passed 00:09:21.226 Test: blockdev reset ...[2024-11-20 11:03:48.541927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:21.226 [2024-11-20 11:03:48.541999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1766340 (9): Bad file descriptor 00:09:21.226 [2024-11-20 11:03:48.644227] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:21.226 passed 00:09:21.226 Test: blockdev write read 8 blocks ...passed 00:09:21.226 Test: blockdev write read size > 128k ...passed 00:09:21.226 Test: blockdev write read invalid size ...passed 00:09:21.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:21.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:21.484 Test: blockdev write read max offset ...passed 00:09:21.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:21.484 Test: blockdev writev readv 8 blocks ...passed 00:09:21.484 Test: blockdev writev readv 30 x 1block ...passed 00:09:21.484 Test: blockdev writev readv block ...passed 00:09:21.484 Test: blockdev writev readv size > 128k ...passed 00:09:21.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:21.484 Test: blockdev comparev and writev ...[2024-11-20 11:03:48.854772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.484 [2024-11-20 11:03:48.854798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:21.484 [2024-11-20 11:03:48.854812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.484 [2024-11-20 
11:03:48.854820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:21.484 [2024-11-20 11:03:48.855058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.485 [2024-11-20 11:03:48.855068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:21.485 [2024-11-20 11:03:48.855080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.485 [2024-11-20 11:03:48.855087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:21.485 [2024-11-20 11:03:48.855325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.485 [2024-11-20 11:03:48.855334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:21.485 [2024-11-20 11:03:48.855346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.485 [2024-11-20 11:03:48.855354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:21.485 [2024-11-20 11:03:48.855606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.485 [2024-11-20 11:03:48.855616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:21.485 [2024-11-20 11:03:48.855628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.485 [2024-11-20 11:03:48.855635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:21.485 passed 00:09:21.485 Test: blockdev nvme passthru rw ...passed 00:09:21.485 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:03:48.938323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:21.485 [2024-11-20 11:03:48.938341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:21.485 [2024-11-20 11:03:48.938449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:21.485 [2024-11-20 11:03:48.938458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:21.485 [2024-11-20 11:03:48.938554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:21.485 [2024-11-20 11:03:48.938564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:21.485 [2024-11-20 11:03:48.938664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:21.485 [2024-11-20 11:03:48.938673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:21.485 passed 00:09:21.485 Test: blockdev nvme admin passthru ...passed 00:09:21.744 Test: blockdev copy ...passed 00:09:21.744 00:09:21.744 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.744 suites 1 1 n/a 0 0 00:09:21.744 tests 23 23 23 0 0 00:09:21.744 asserts 152 152 152 0 n/a 00:09:21.744 00:09:21.744 Elapsed time = 1.143 seconds 
00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.744 rmmod nvme_tcp 00:09:21.744 rmmod nvme_fabrics 00:09:21.744 rmmod nvme_keyring 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3951213 ']' 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3951213 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3951213 ']' 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3951213 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.744 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3951213 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3951213' 00:09:22.002 killing process with pid 3951213 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3951213 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3951213 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.002 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.536 00:09:24.536 real 0m10.070s 00:09:24.536 user 0m10.057s 00:09:24.536 sys 0m5.091s 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.536 ************************************ 00:09:24.536 END TEST nvmf_bdevio 00:09:24.536 ************************************ 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:24.536 00:09:24.536 real 4m36.754s 00:09:24.536 user 10m21.061s 00:09:24.536 sys 1m38.272s 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.536 ************************************ 00:09:24.536 END TEST nvmf_target_core 00:09:24.536 ************************************ 00:09:24.536 11:03:51 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:24.536 11:03:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.536 11:03:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.536 11:03:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:24.536 ************************************ 00:09:24.536 START TEST nvmf_target_extra 00:09:24.536 ************************************ 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:24.536 * Looking for test storage... 00:09:24.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.536 --rc genhtml_branch_coverage=1 00:09:24.536 --rc genhtml_function_coverage=1 00:09:24.536 --rc genhtml_legend=1 00:09:24.536 --rc geninfo_all_blocks=1 
00:09:24.536 --rc geninfo_unexecuted_blocks=1 00:09:24.536 00:09:24.536 ' 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.536 --rc genhtml_branch_coverage=1 00:09:24.536 --rc genhtml_function_coverage=1 00:09:24.536 --rc genhtml_legend=1 00:09:24.536 --rc geninfo_all_blocks=1 00:09:24.536 --rc geninfo_unexecuted_blocks=1 00:09:24.536 00:09:24.536 ' 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.536 --rc genhtml_branch_coverage=1 00:09:24.536 --rc genhtml_function_coverage=1 00:09:24.536 --rc genhtml_legend=1 00:09:24.536 --rc geninfo_all_blocks=1 00:09:24.536 --rc geninfo_unexecuted_blocks=1 00:09:24.536 00:09:24.536 ' 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.536 --rc genhtml_branch_coverage=1 00:09:24.536 --rc genhtml_function_coverage=1 00:09:24.536 --rc genhtml_legend=1 00:09:24.536 --rc geninfo_all_blocks=1 00:09:24.536 --rc geninfo_unexecuted_blocks=1 00:09:24.536 00:09:24.536 ' 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.536 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 ************************************ 00:09:24.537 START TEST nvmf_example 00:09:24.537 ************************************ 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:24.537 * Looking for test storage... 00:09:24.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.537 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.537 
11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.537 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.796 --rc genhtml_branch_coverage=1 00:09:24.796 --rc genhtml_function_coverage=1 00:09:24.796 --rc genhtml_legend=1 00:09:24.796 --rc geninfo_all_blocks=1 00:09:24.796 --rc geninfo_unexecuted_blocks=1 00:09:24.796 00:09:24.796 ' 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.796 --rc genhtml_branch_coverage=1 00:09:24.796 --rc genhtml_function_coverage=1 00:09:24.796 --rc genhtml_legend=1 00:09:24.796 --rc geninfo_all_blocks=1 00:09:24.796 --rc geninfo_unexecuted_blocks=1 00:09:24.796 00:09:24.796 ' 00:09:24.796 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:24.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.796 --rc genhtml_branch_coverage=1 00:09:24.796 --rc genhtml_function_coverage=1 00:09:24.796 --rc genhtml_legend=1 00:09:24.796 --rc geninfo_all_blocks=1 00:09:24.796 --rc geninfo_unexecuted_blocks=1 00:09:24.796 00:09:24.796 ' 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.797 --rc 
genhtml_branch_coverage=1 00:09:24.797 --rc genhtml_function_coverage=1 00:09:24.797 --rc genhtml_legend=1 00:09:24.797 --rc geninfo_all_blocks=1 00:09:24.797 --rc geninfo_unexecuted_blocks=1 00:09:24.797 00:09:24.797 ' 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:24.797 11:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.797 
11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.797 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.367 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.368 11:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:31.368 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:31.368 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:31.368 Found net devices under 0000:86:00.0: cvl_0_0 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.368 11:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:31.368 Found net devices under 0000:86:00.1: cvl_0_1 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.368 
11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.368 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.368 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.368 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.368 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:09:31.369 00:09:31.369 --- 10.0.0.2 ping statistics --- 00:09:31.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.369 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:09:31.369 00:09:31.369 --- 10.0.0.1 ping statistics --- 00:09:31.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.369 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.369 11:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3955201 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3955201 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3955201 ']' 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:31.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.369 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.628 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.628 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:31.628 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:31.628 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:31.629 
11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.629 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:31.927 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:41.962 Initializing NVMe Controllers 00:09:41.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:41.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:41.962 Initialization complete. Launching workers. 00:09:41.962 ======================================================== 00:09:41.962 Latency(us) 00:09:41.962 Device Information : IOPS MiB/s Average min max 00:09:41.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17718.20 69.21 3611.68 682.90 24401.26 00:09:41.962 ======================================================== 00:09:41.962 Total : 17718.20 69.21 3611.68 682.90 24401.26 00:09:41.962 00:09:41.962 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:41.962 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:41.962 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:41.962 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:41.962 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.962 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:41.962 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.962 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.221 rmmod nvme_tcp 00:09:42.221 rmmod nvme_fabrics 00:09:42.221 rmmod nvme_keyring 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3955201 ']' 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3955201 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3955201 ']' 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3955201 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3955201 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3955201' 00:09:42.221 killing process with pid 3955201 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3955201 00:09:42.221 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3955201 00:09:42.480 nvmf threads initialize successfully 00:09:42.480 bdev subsystem init successfully 00:09:42.480 created a nvmf target service 00:09:42.480 create targets's poll groups done 00:09:42.480 all subsystems of target started 00:09:42.480 nvmf target is running 00:09:42.480 all subsystems of target stopped 00:09:42.480 destroy targets's poll groups done 00:09:42.480 destroyed the nvmf target service 00:09:42.480 bdev subsystem 
finish successfully 00:09:42.480 nvmf threads destroy successfully 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.480 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.386 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.386 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:44.386 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.386 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.386 00:09:44.386 real 0m19.980s 00:09:44.386 user 0m46.594s 00:09:44.386 sys 0m6.162s 00:09:44.386 
11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.386 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.386 ************************************ 00:09:44.386 END TEST nvmf_example 00:09:44.386 ************************************ 00:09:44.386 11:04:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:44.386 11:04:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.386 11:04:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.386 11:04:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:44.646 ************************************ 00:09:44.646 START TEST nvmf_filesystem 00:09:44.646 ************************************ 00:09:44.646 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:44.646 * Looking for test storage... 
00:09:44.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:44.646 
11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.646 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.647 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:44.647 --rc genhtml_branch_coverage=1 00:09:44.647 --rc genhtml_function_coverage=1 00:09:44.647 --rc genhtml_legend=1 00:09:44.647 --rc geninfo_all_blocks=1 00:09:44.647 --rc geninfo_unexecuted_blocks=1 00:09:44.647 00:09:44.647 ' 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.647 --rc genhtml_branch_coverage=1 00:09:44.647 --rc genhtml_function_coverage=1 00:09:44.647 --rc genhtml_legend=1 00:09:44.647 --rc geninfo_all_blocks=1 00:09:44.647 --rc geninfo_unexecuted_blocks=1 00:09:44.647 00:09:44.647 ' 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.647 --rc genhtml_branch_coverage=1 00:09:44.647 --rc genhtml_function_coverage=1 00:09:44.647 --rc genhtml_legend=1 00:09:44.647 --rc geninfo_all_blocks=1 00:09:44.647 --rc geninfo_unexecuted_blocks=1 00:09:44.647 00:09:44.647 ' 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.647 --rc genhtml_branch_coverage=1 00:09:44.647 --rc genhtml_function_coverage=1 00:09:44.647 --rc genhtml_legend=1 00:09:44.647 --rc geninfo_all_blocks=1 00:09:44.647 --rc geninfo_unexecuted_blocks=1 00:09:44.647 00:09:44.647 ' 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:44.647 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:44.647 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:44.647 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:44.647 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:44.647 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:44.648 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:44.648 
11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:44.648 #define SPDK_CONFIG_H 00:09:44.648 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:44.648 #define SPDK_CONFIG_APPS 1 00:09:44.648 #define SPDK_CONFIG_ARCH native 00:09:44.648 #undef SPDK_CONFIG_ASAN 00:09:44.648 #undef SPDK_CONFIG_AVAHI 00:09:44.648 #undef SPDK_CONFIG_CET 00:09:44.648 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:44.648 #define SPDK_CONFIG_COVERAGE 1 00:09:44.648 #define SPDK_CONFIG_CROSS_PREFIX 00:09:44.648 #undef SPDK_CONFIG_CRYPTO 00:09:44.648 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:44.648 #undef SPDK_CONFIG_CUSTOMOCF 00:09:44.648 #undef SPDK_CONFIG_DAOS 00:09:44.648 #define SPDK_CONFIG_DAOS_DIR 00:09:44.648 #define SPDK_CONFIG_DEBUG 1 00:09:44.648 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:44.648 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:44.648 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:44.648 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:44.648 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:44.648 #undef SPDK_CONFIG_DPDK_UADK 00:09:44.648 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:44.648 #define SPDK_CONFIG_EXAMPLES 1 00:09:44.648 #undef SPDK_CONFIG_FC 00:09:44.648 #define SPDK_CONFIG_FC_PATH 00:09:44.648 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:44.648 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:44.648 #define SPDK_CONFIG_FSDEV 1 00:09:44.648 #undef SPDK_CONFIG_FUSE 00:09:44.648 #undef SPDK_CONFIG_FUZZER 00:09:44.648 #define SPDK_CONFIG_FUZZER_LIB 00:09:44.648 #undef SPDK_CONFIG_GOLANG 00:09:44.648 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:44.648 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:44.648 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:44.648 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:44.648 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:44.648 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:44.648 #undef SPDK_CONFIG_HAVE_LZ4 00:09:44.648 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:44.648 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:44.648 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:44.648 #define SPDK_CONFIG_IDXD 1 00:09:44.648 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:44.648 #undef SPDK_CONFIG_IPSEC_MB 00:09:44.648 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:44.648 #define SPDK_CONFIG_ISAL 1 00:09:44.648 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:44.648 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:44.648 #define SPDK_CONFIG_LIBDIR 00:09:44.648 #undef SPDK_CONFIG_LTO 00:09:44.648 #define SPDK_CONFIG_MAX_LCORES 128 00:09:44.648 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:44.648 #define SPDK_CONFIG_NVME_CUSE 1 00:09:44.648 #undef SPDK_CONFIG_OCF 00:09:44.648 #define SPDK_CONFIG_OCF_PATH 00:09:44.648 #define SPDK_CONFIG_OPENSSL_PATH 00:09:44.648 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:44.648 #define SPDK_CONFIG_PGO_DIR 00:09:44.648 #undef SPDK_CONFIG_PGO_USE 00:09:44.648 #define SPDK_CONFIG_PREFIX /usr/local 00:09:44.648 #undef SPDK_CONFIG_RAID5F 00:09:44.648 #undef SPDK_CONFIG_RBD 00:09:44.648 #define SPDK_CONFIG_RDMA 1 00:09:44.648 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:44.648 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:44.648 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:44.648 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:44.648 #define SPDK_CONFIG_SHARED 1 00:09:44.648 #undef SPDK_CONFIG_SMA 00:09:44.648 #define SPDK_CONFIG_TESTS 1 00:09:44.648 #undef SPDK_CONFIG_TSAN 00:09:44.648 #define SPDK_CONFIG_UBLK 1 00:09:44.648 #define SPDK_CONFIG_UBSAN 1 00:09:44.648 #undef SPDK_CONFIG_UNIT_TESTS 00:09:44.648 #undef SPDK_CONFIG_URING 00:09:44.648 #define SPDK_CONFIG_URING_PATH 00:09:44.648 #undef SPDK_CONFIG_URING_ZNS 00:09:44.648 #undef SPDK_CONFIG_USDT 00:09:44.648 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:44.648 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:44.648 #define SPDK_CONFIG_VFIO_USER 1 00:09:44.648 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:44.648 #define SPDK_CONFIG_VHOST 1 00:09:44.648 #define SPDK_CONFIG_VIRTIO 1 00:09:44.648 #undef SPDK_CONFIG_VTUNE 00:09:44.648 #define SPDK_CONFIG_VTUNE_DIR 00:09:44.648 #define SPDK_CONFIG_WERROR 1 00:09:44.648 #define SPDK_CONFIG_WPDK_DIR 00:09:44.648 #undef SPDK_CONFIG_XNVME 00:09:44.648 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:44.648 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.649 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:44.649 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:44.649 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:44.649 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:44.649 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:44.910 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:44.910 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:44.911 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:44.911 
11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:44.911 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:44.911 
11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:44.911 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:44.911 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:44.912 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3957607 ]] 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3957607 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.HWiijq 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HWiijq/tests/target /tmp/spdk.HWiijq 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189217292288 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6746669056 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981407232 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:44.913 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=573440 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:44.913 * Looking for test storage... 
00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189217292288 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8961261568 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:44.913 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.914 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:44.914 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.914 --rc genhtml_branch_coverage=1 00:09:44.914 --rc genhtml_function_coverage=1 00:09:44.914 --rc genhtml_legend=1 00:09:44.914 --rc geninfo_all_blocks=1 00:09:44.914 --rc geninfo_unexecuted_blocks=1 00:09:44.914 00:09:44.914 ' 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.914 --rc genhtml_branch_coverage=1 00:09:44.914 --rc genhtml_function_coverage=1 00:09:44.914 --rc genhtml_legend=1 00:09:44.914 --rc geninfo_all_blocks=1 00:09:44.914 --rc geninfo_unexecuted_blocks=1 00:09:44.914 00:09:44.914 ' 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.914 --rc genhtml_branch_coverage=1 00:09:44.914 --rc genhtml_function_coverage=1 00:09:44.914 --rc genhtml_legend=1 00:09:44.914 --rc geninfo_all_blocks=1 00:09:44.914 --rc geninfo_unexecuted_blocks=1 00:09:44.914 00:09:44.914 ' 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.914 --rc genhtml_branch_coverage=1 00:09:44.914 --rc genhtml_function_coverage=1 00:09:44.914 --rc genhtml_legend=1 00:09:44.914 --rc geninfo_all_blocks=1 00:09:44.914 --rc geninfo_unexecuted_blocks=1 00:09:44.914 00:09:44.914 ' 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.914 11:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.914 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.915 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.487 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.488 11:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:51.488 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:51.488 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.488 11:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:51.488 Found net devices under 0000:86:00.0: cvl_0_0 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:51.488 Found net devices under 0000:86:00.1: cvl_0_1 00:09:51.488 11:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:51.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:09:51.488 00:09:51.488 --- 10.0.0.2 ping statistics --- 00:09:51.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.488 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:09:51.488 00:09:51.488 --- 10.0.0.1 ping statistics --- 00:09:51.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.488 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.488 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:51.489 11:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.489 ************************************ 00:09:51.489 START TEST nvmf_filesystem_no_in_capsule 00:09:51.489 ************************************ 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3960651 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3960651 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 3960651 ']' 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.489 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.489 [2024-11-20 11:04:18.462979] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:09:51.489 [2024-11-20 11:04:18.463023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.489 [2024-11-20 11:04:18.543715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.489 [2024-11-20 11:04:18.585823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.489 [2024-11-20 11:04:18.585861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:51.489 [2024-11-20 11:04:18.585868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.489 [2024-11-20 11:04:18.585874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.489 [2024-11-20 11:04:18.585879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.489 [2024-11-20 11:04:18.587493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.489 [2024-11-20 11:04:18.587601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.489 [2024-11-20 11:04:18.587712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.489 [2024-11-20 11:04:18.587714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.057 [2024-11-20 11:04:19.338710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.057 Malloc1 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.057 [2024-11-20 11:04:19.494424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:52.057 11:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.057 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.058 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:52.058 { 00:09:52.058 "name": "Malloc1", 00:09:52.058 "aliases": [ 00:09:52.058 "26842071-cead-4ef8-8efd-0fd32b6a5357" 00:09:52.058 ], 00:09:52.058 "product_name": "Malloc disk", 00:09:52.058 "block_size": 512, 00:09:52.058 "num_blocks": 1048576, 00:09:52.058 "uuid": "26842071-cead-4ef8-8efd-0fd32b6a5357", 00:09:52.058 "assigned_rate_limits": { 00:09:52.058 "rw_ios_per_sec": 0, 00:09:52.058 "rw_mbytes_per_sec": 0, 00:09:52.058 "r_mbytes_per_sec": 0, 00:09:52.058 "w_mbytes_per_sec": 0 00:09:52.058 }, 00:09:52.058 "claimed": true, 00:09:52.058 "claim_type": "exclusive_write", 00:09:52.058 "zoned": false, 00:09:52.058 "supported_io_types": { 00:09:52.058 "read": true, 00:09:52.058 "write": true, 00:09:52.058 "unmap": true, 00:09:52.058 "flush": true, 00:09:52.058 "reset": true, 00:09:52.058 "nvme_admin": false, 00:09:52.058 "nvme_io": false, 00:09:52.058 "nvme_io_md": false, 00:09:52.058 "write_zeroes": true, 00:09:52.058 "zcopy": true, 00:09:52.058 "get_zone_info": false, 00:09:52.058 "zone_management": false, 00:09:52.058 "zone_append": false, 00:09:52.058 "compare": false, 00:09:52.058 "compare_and_write": 
false, 00:09:52.058 "abort": true, 00:09:52.058 "seek_hole": false, 00:09:52.058 "seek_data": false, 00:09:52.058 "copy": true, 00:09:52.058 "nvme_iov_md": false 00:09:52.058 }, 00:09:52.058 "memory_domains": [ 00:09:52.058 { 00:09:52.058 "dma_device_id": "system", 00:09:52.058 "dma_device_type": 1 00:09:52.058 }, 00:09:52.058 { 00:09:52.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.058 "dma_device_type": 2 00:09:52.058 } 00:09:52.058 ], 00:09:52.058 "driver_specific": {} 00:09:52.058 } 00:09:52.058 ]' 00:09:52.058 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:52.317 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:52.317 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:52.317 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:52.317 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:52.317 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:52.317 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:52.317 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:53.695 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:53.695 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:53.695 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.695 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:53.695 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:55.600 11:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:55.600 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:56.167 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:56.426 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:57.363 11:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.363 ************************************ 00:09:57.363 START TEST filesystem_ext4 00:09:57.363 ************************************ 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:57.363 11:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:57.363 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:57.363 mke2fs 1.47.0 (5-Feb-2023) 00:09:57.623 Discarding device blocks: 0/522240 done 00:09:57.623 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:57.623 Filesystem UUID: 723a1ae8-e0aa-4210-bb8d-0488b02f1920 00:09:57.623 Superblock backups stored on blocks: 00:09:57.623 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:57.623 00:09:57.623 Allocating group tables: 0/64 done 00:09:57.623 Writing inode tables: 0/64 done 00:09:59.528 Creating journal (8192 blocks): done 00:10:01.290 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:10:01.290 00:10:01.290 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:01.290 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:07.854 11:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3960651 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:07.854 00:10:07.854 real 0m10.079s 00:10:07.854 user 0m0.038s 00:10:07.854 sys 0m0.067s 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:07.854 ************************************ 00:10:07.854 END TEST filesystem_ext4 00:10:07.854 ************************************ 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:07.854 
11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.854 ************************************ 00:10:07.854 START TEST filesystem_btrfs 00:10:07.854 ************************************ 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:07.854 11:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:07.854 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:07.854 btrfs-progs v6.8.1 00:10:07.854 See https://btrfs.readthedocs.io for more information. 00:10:07.854 00:10:07.854 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:07.854 NOTE: several default settings have changed in version 5.15, please make sure 00:10:07.854 this does not affect your deployments: 00:10:07.854 - DUP for metadata (-m dup) 00:10:07.854 - enabled no-holes (-O no-holes) 00:10:07.854 - enabled free-space-tree (-R free-space-tree) 00:10:07.854 00:10:07.854 Label: (null) 00:10:07.854 UUID: a6c6f52d-12c3-4bbf-9ad3-0494afbf0bf5 00:10:07.854 Node size: 16384 00:10:07.855 Sector size: 4096 (CPU page size: 4096) 00:10:07.855 Filesystem size: 510.00MiB 00:10:07.855 Block group profiles: 00:10:07.855 Data: single 8.00MiB 00:10:07.855 Metadata: DUP 32.00MiB 00:10:07.855 System: DUP 8.00MiB 00:10:07.855 SSD detected: yes 00:10:07.855 Zoned device: no 00:10:07.855 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:07.855 Checksum: crc32c 00:10:07.855 Number of devices: 1 00:10:07.855 Devices: 00:10:07.855 ID SIZE PATH 00:10:07.855 1 510.00MiB /dev/nvme0n1p1 00:10:07.855 00:10:07.855 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:07.855 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:08.792 11:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3960651 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:08.792 00:10:08.792 real 0m1.239s 00:10:08.792 user 0m0.033s 00:10:08.792 sys 0m0.107s 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.792 
11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:08.792 ************************************ 00:10:08.792 END TEST filesystem_btrfs 00:10:08.792 ************************************ 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.792 ************************************ 00:10:08.792 START TEST filesystem_xfs 00:10:08.792 ************************************ 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:08.792 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:09.052 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:09.052 = sectsz=512 attr=2, projid32bit=1 00:10:09.052 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:09.052 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:09.052 data = bsize=4096 blocks=130560, imaxpct=25 00:10:09.052 = sunit=0 swidth=0 blks 00:10:09.052 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:09.052 log =internal log bsize=4096 blocks=16384, version=2 00:10:09.052 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:09.052 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:09.620 Discarding blocks...Done. 
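The `waitforserial` helper traced earlier in this log polls `lsblk -l -o NAME,SERIAL` until a namespace with the expected serial shows up, then `target/filesystem.sh@63` extracts the block-device name with a Perl look-ahead grep. A minimal standalone sketch of that extraction follows; the `lsblk` output is hardcoded sample data (not from a live host), and requires GNU grep with PCRE support (`-P`):

```shell
# Sample of what `lsblk -l -o NAME,SERIAL` prints once the NVMe/TCP
# namespace has attached (hardcoded sample, not a live system).
lsblk_output='NAME      SERIAL
sda
nvme0n1   SPDKISFASTANDAWESOME'

serial=SPDKISFASTANDAWESOME

# Count matching devices, as the retry loop at
# autotest_common.sh@1211 does with `grep -c`.
nvme_devices=$(printf '%s\n' "$lsblk_output" | grep -c "$serial")

# Extract the device name with the same look-ahead pattern used at
# target/filesystem.sh@63: word characters followed by whitespace
# and the serial, keeping only the name.
nvme_name=$(printf '%s\n' "$lsblk_output" | grep -oP '([\w]*)(?=\s+'"$serial"')')

echo "$nvme_devices $nvme_name"
```

Once `nvme_name` resolves (here `nvme0n1`), the trace partitions `/dev/$nvme_name` with `parted` and runs one `mkfs`/`mount`/`umount` cycle per filesystem under test.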
00:10:09.620 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:09.620 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3960651 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.154 11:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.154 00:10:12.154 real 0m3.230s 00:10:12.154 user 0m0.027s 00:10:12.154 sys 0m0.071s 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:12.154 ************************************ 00:10:12.154 END TEST filesystem_xfs 00:10:12.154 ************************************ 00:10:12.154 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3960651 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3960651 ']' 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3960651 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.414 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3960651 00:10:12.673 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.673 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.673 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3960651' 00:10:12.673 killing process with pid 3960651 00:10:12.673 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3960651 00:10:12.673 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3960651 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:12.933 00:10:12.933 real 0m21.842s 00:10:12.933 user 1m26.263s 00:10:12.933 sys 0m1.483s 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.933 ************************************ 00:10:12.933 END TEST nvmf_filesystem_no_in_capsule 00:10:12.933 ************************************ 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.933 11:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.933 ************************************ 00:10:12.933 START TEST nvmf_filesystem_in_capsule 00:10:12.933 ************************************ 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3964569 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3964569 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3964569 ']' 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.933 11:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.933 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.933 [2024-11-20 11:04:40.374315] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:10:12.933 [2024-11-20 11:04:40.374361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.193 [2024-11-20 11:04:40.454409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.193 [2024-11-20 11:04:40.493742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.193 [2024-11-20 11:04:40.493784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.193 [2024-11-20 11:04:40.493792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.193 [2024-11-20 11:04:40.493798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.193 [2024-11-20 11:04:40.493803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
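Both `waitforlisten` (polling for `/var/tmp/spdk.sock` above) and `waitforserial` (`autotest_common.sh@1202`) follow the same bounded retry pattern: probe, sleep, give up after N attempts. A hedged sketch of that pattern, with an illustrative probe (a marker file standing in for the real readiness check; names here are hypothetical, not the helpers' actual implementation):

```shell
# Generic bounded-retry loop: run the probe command up to
# $retries times, sleeping between attempts; return 0 on the
# first success, 1 if the budget is exhausted.
wait_for() {
    local retries=$1; shift
    local i=0
    while (( i++ < retries )); do
        if "$@"; then
            return 0
        fi
        sleep 0.1   # the real helpers sleep longer (0.5-2s per try)
    done
    return 1
}

# Illustrative probe: readiness is "a marker file exists", created
# by a background job after a short delay.
marker=$(mktemp -u)
( sleep 0.3; touch "$marker" ) &
wait_for 15 test -e "$marker" && status=up || status=down
wait
rm -f "$marker"
echo "$status"
```

The bound matters in CI: an unready target fails the stage after a fixed delay instead of hanging the whole pipeline.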
00:10:13.193 [2024-11-20 11:04:40.495328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.193 [2024-11-20 11:04:40.495437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.193 [2024-11-20 11:04:40.495546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.193 [2024-11-20 11:04:40.495546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.193 [2024-11-20 11:04:40.640864] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.193 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.194 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:13.194 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.194 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.453 Malloc1 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.453 11:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.453 [2024-11-20 11:04:40.789504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.453 11:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:13.453 { 00:10:13.453 "name": "Malloc1", 00:10:13.453 "aliases": [ 00:10:13.453 "3aae3020-91cc-4ff3-8ca3-60a8b0d2ee29" 00:10:13.453 ], 00:10:13.453 "product_name": "Malloc disk", 00:10:13.453 "block_size": 512, 00:10:13.453 "num_blocks": 1048576, 00:10:13.453 "uuid": "3aae3020-91cc-4ff3-8ca3-60a8b0d2ee29", 00:10:13.453 "assigned_rate_limits": { 00:10:13.453 "rw_ios_per_sec": 0, 00:10:13.453 "rw_mbytes_per_sec": 0, 00:10:13.453 "r_mbytes_per_sec": 0, 00:10:13.453 "w_mbytes_per_sec": 0 00:10:13.453 }, 00:10:13.453 "claimed": true, 00:10:13.453 "claim_type": "exclusive_write", 00:10:13.453 "zoned": false, 00:10:13.453 "supported_io_types": { 00:10:13.453 "read": true, 00:10:13.453 "write": true, 00:10:13.453 "unmap": true, 00:10:13.453 "flush": true, 00:10:13.453 "reset": true, 00:10:13.453 "nvme_admin": false, 00:10:13.453 "nvme_io": false, 00:10:13.453 "nvme_io_md": false, 00:10:13.453 "write_zeroes": true, 00:10:13.453 "zcopy": true, 00:10:13.453 "get_zone_info": false, 00:10:13.453 "zone_management": false, 00:10:13.453 "zone_append": false, 00:10:13.453 "compare": false, 00:10:13.453 "compare_and_write": false, 00:10:13.453 "abort": true, 00:10:13.453 "seek_hole": false, 00:10:13.453 "seek_data": false, 00:10:13.453 "copy": true, 00:10:13.453 "nvme_iov_md": false 00:10:13.453 }, 00:10:13.453 "memory_domains": [ 00:10:13.453 { 00:10:13.453 "dma_device_id": "system", 00:10:13.453 "dma_device_type": 1 00:10:13.453 }, 00:10:13.453 { 00:10:13.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.453 "dma_device_type": 2 00:10:13.453 } 00:10:13.453 ], 00:10:13.453 
"driver_specific": {} 00:10:13.453 } 00:10:13.453 ]' 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:13.453 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.830 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.830 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:14.830 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.830 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:14.830 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:16.734 11:04:44 
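filesystem.sh@63 maps the controller serial back to a kernel block device name with a PCRE lookahead over `lsblk -l -o NAME,SERIAL` output. The same extraction run against a fabricated two-line sample (GNU grep with -P support assumed; the lsblk text here is illustrative, not from the log):

```shell
# Fabricated lsblk -l -o NAME,SERIAL output for illustration.
lsblk_out='NAME    SERIAL
nvme0n1 SPDKISFASTANDAWESOME'
# \w* captures the device name; the (?=...) lookahead anchors on the serial
# without consuming it, so only the NAME column is printed.
nvme_name=$(printf '%s\n' "$lsblk_out" | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
echo "$nvme_name"
```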
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:16.734 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:16.993 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:17.251 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.184 ************************************ 00:10:18.184 START TEST filesystem_in_capsule_ext4 00:10:18.184 ************************************ 00:10:18.184 11:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:18.184 11:04:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:18.442 mke2fs 1.47.0 (5-Feb-2023) 00:10:18.442 Discarding device blocks: 
0/522240 done 00:10:18.442 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:18.442 Filesystem UUID: 145b9619-fb85-42be-bac2-8b09aa27a487 00:10:18.442 Superblock backups stored on blocks: 00:10:18.442 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:18.442 00:10:18.442 Allocating group tables: 0/64 done 00:10:18.442 Writing inode tables: 0/64 done 00:10:18.700 Creating journal (8192 blocks): done 00:10:18.700 Writing superblocks and filesystem accounting information: 0/64 done 00:10:18.700 00:10:18.700 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:18.700 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3964569 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:23.962 00:10:23.962 real 0m5.692s 00:10:23.962 user 0m0.028s 00:10:23.962 sys 0m0.067s 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:23.962 ************************************ 00:10:23.962 END TEST filesystem_in_capsule_ext4 00:10:23.962 ************************************ 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.962 ************************************ 00:10:23.962 START 
TEST filesystem_in_capsule_btrfs 00:10:23.962 ************************************ 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:23.962 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:23.963 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:23.963 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:24.221 btrfs-progs v6.8.1 00:10:24.221 See https://btrfs.readthedocs.io for more information. 00:10:24.221 00:10:24.221 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:24.221 NOTE: several default settings have changed in version 5.15, please make sure 00:10:24.221 this does not affect your deployments: 00:10:24.221 - DUP for metadata (-m dup) 00:10:24.221 - enabled no-holes (-O no-holes) 00:10:24.221 - enabled free-space-tree (-R free-space-tree) 00:10:24.221 00:10:24.221 Label: (null) 00:10:24.221 UUID: 1e4a81f1-dfb3-4cfe-966f-045dc91a9e6a 00:10:24.221 Node size: 16384 00:10:24.221 Sector size: 4096 (CPU page size: 4096) 00:10:24.221 Filesystem size: 510.00MiB 00:10:24.221 Block group profiles: 00:10:24.221 Data: single 8.00MiB 00:10:24.221 Metadata: DUP 32.00MiB 00:10:24.221 System: DUP 8.00MiB 00:10:24.221 SSD detected: yes 00:10:24.221 Zoned device: no 00:10:24.221 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:24.221 Checksum: crc32c 00:10:24.221 Number of devices: 1 00:10:24.221 Devices: 00:10:24.221 ID SIZE PATH 00:10:24.221 1 510.00MiB /dev/nvme0n1p1 00:10:24.221 00:10:24.221 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:24.221 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:25.155 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:25.155 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:25.155 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:25.155 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:25.155 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:25.155 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:25.155 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3964569 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:25.156 00:10:25.156 real 0m0.950s 00:10:25.156 user 0m0.023s 00:10:25.156 sys 0m0.117s 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:25.156 ************************************ 00:10:25.156 END TEST filesystem_in_capsule_btrfs 00:10:25.156 ************************************ 00:10:25.156 11:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.156 ************************************ 00:10:25.156 START TEST filesystem_in_capsule_xfs 00:10:25.156 ************************************ 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:25.156 
11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:25.156 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:25.156 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:25.156 = sectsz=512 attr=2, projid32bit=1 00:10:25.156 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:25.156 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:25.156 data = bsize=4096 blocks=130560, imaxpct=25 00:10:25.156 = sunit=0 swidth=0 blks 00:10:25.156 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:25.156 log =internal log bsize=4096 blocks=16384, version=2 00:10:25.156 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:25.156 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:26.089 Discarding blocks...Done. 
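Across the three runs, make_filesystem (autotest_common.sh@930-941) branches on the filesystem type to pick the force flag: mke2fs spells it -F, while btrfs-progs and xfsprogs take -f — which is exactly the '[ ext4 = ext4 ]' / '[ btrfs = ext4 ]' / '[ xfs = ext4 ]' tests visible in the trace. That selection, factored into a small helper (the function name is ours, not the test suite's):

```shell
# Hypothetical helper mirroring the force-flag branch in the trace:
# ext4's mke2fs forces with -F; mkfs.btrfs and mkfs.xfs use -f.
fs_force_flag() {
    case "$1" in
        ext4) printf '%s\n' -F ;;
        *)    printf '%s\n' -f ;;
    esac
}
fs_force_flag ext4    # -F
fs_force_flag xfs     # -f
```

The suite then invokes `mkfs.$fstype $force $dev_name`, retrying up to its internal limit before giving up.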
00:10:26.089 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:26.089 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3964569 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:28.616 00:10:28.616 real 0m3.449s 00:10:28.616 user 0m0.024s 00:10:28.616 sys 0m0.075s 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:28.616 ************************************ 00:10:28.616 END TEST filesystem_in_capsule_xfs 00:10:28.616 ************************************ 00:10:28.616 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:28.616 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:28.617 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.617 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.617 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:28.617 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:28.617 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.875 11:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3964569 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3964569 ']' 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3964569 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.875 11:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3964569 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3964569' 00:10:28.875 killing process with pid 3964569 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3964569 00:10:28.875 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3964569 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:29.135 00:10:29.135 real 0m16.204s 00:10:29.135 user 1m3.696s 00:10:29.135 sys 0m1.380s 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.135 ************************************ 00:10:29.135 END TEST nvmf_filesystem_in_capsule 00:10:29.135 ************************************ 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
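The killprocess teardown traced above (autotest_common.sh@954-978) first probes the PID with `kill -0`, verifies the process name via `ps --no-headers -o comm=`, then kills and waits on it. The liveness probe on its own — `kill -0` sends no signal, it only reports whether the PID exists and is signalable (the helper name here is ours):

```shell
# kill -0 delivers nothing; a zero exit means the PID is alive and
# signalable, which is how killprocess guards its kill/wait sequence.
is_alive() { kill -0 "$1" 2>/dev/null; }

sleep 5 & pid=$!
is_alive "$pid" && echo "running"
kill "$pid"; wait "$pid" 2>/dev/null || true   # reap so the PID truly vanishes
is_alive "$pid" || echo "gone"
```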
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.135 rmmod nvme_tcp 00:10:29.135 rmmod nvme_fabrics 00:10:29.135 rmmod nvme_keyring 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:29.135 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:29.395 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.395 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:29.395 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.395 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.395 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.303 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:31.303 00:10:31.303 real 0m46.788s 00:10:31.303 user 2m32.052s 00:10:31.303 sys 0m7.533s 00:10:31.303 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.303 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.303 ************************************ 00:10:31.303 END TEST nvmf_filesystem 00:10:31.303 ************************************ 00:10:31.303 11:04:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:31.303 11:04:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.303 11:04:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.303 11:04:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:31.303 ************************************ 00:10:31.303 START TEST nvmf_target_discovery 00:10:31.303 ************************************ 00:10:31.303 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:31.563 * Looking for test storage... 
00:10:31.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:31.563 
11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.563 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.564 --rc genhtml_branch_coverage=1 00:10:31.564 --rc genhtml_function_coverage=1 00:10:31.564 --rc genhtml_legend=1 00:10:31.564 --rc geninfo_all_blocks=1 00:10:31.564 --rc geninfo_unexecuted_blocks=1 00:10:31.564 00:10:31.564 ' 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.564 --rc genhtml_branch_coverage=1 00:10:31.564 --rc genhtml_function_coverage=1 00:10:31.564 --rc genhtml_legend=1 00:10:31.564 --rc geninfo_all_blocks=1 00:10:31.564 --rc geninfo_unexecuted_blocks=1 00:10:31.564 00:10:31.564 ' 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.564 --rc genhtml_branch_coverage=1 00:10:31.564 --rc genhtml_function_coverage=1 00:10:31.564 --rc genhtml_legend=1 00:10:31.564 --rc geninfo_all_blocks=1 00:10:31.564 --rc geninfo_unexecuted_blocks=1 00:10:31.564 00:10:31.564 ' 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.564 --rc genhtml_branch_coverage=1 00:10:31.564 --rc genhtml_function_coverage=1 00:10:31.564 --rc genhtml_legend=1 00:10:31.564 --rc geninfo_all_blocks=1 00:10:31.564 --rc geninfo_unexecuted_blocks=1 00:10:31.564 00:10:31.564 ' 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.564 11:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:31.564 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.565 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.308 11:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.308 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.309 11:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:38.309 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:38.309 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.309 11:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:38.309 Found net devices under 0000:86:00.0: cvl_0_0 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.309 11:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:38.309 Found net devices under 0000:86:00.1: cvl_0_1 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:38.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:10:38.309 00:10:38.309 --- 10.0.0.2 ping statistics --- 00:10:38.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.309 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:10:38.309 00:10:38.309 --- 10.0.0.1 ping statistics --- 00:10:38.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.309 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.309 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3970915 00:10:38.310 11:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3970915 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3970915 ']' 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.310 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 [2024-11-20 11:05:05.047607] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:10:38.310 [2024-11-20 11:05:05.047650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.310 [2024-11-20 11:05:05.124921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.310 [2024-11-20 11:05:05.166055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:38.310 [2024-11-20 11:05:05.166097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.310 [2024-11-20 11:05:05.166105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.310 [2024-11-20 11:05:05.166110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.310 [2024-11-20 11:05:05.166115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.310 [2024-11-20 11:05:05.167740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.310 [2024-11-20 11:05:05.167847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.310 [2024-11-20 11:05:05.167972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.310 [2024-11-20 11:05:05.167973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 [2024-11-20 11:05:05.317474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 Null1 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 
11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 [2024-11-20 11:05:05.362955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 Null2 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 
11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 Null3 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.310 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.310 Null4 00:10:38.310 
11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:38.311 00:10:38.311 Discovery Log Number of Records 6, Generation counter 6 00:10:38.311 =====Discovery Log Entry 0====== 00:10:38.311 trtype: tcp 00:10:38.311 adrfam: ipv4 00:10:38.311 subtype: current discovery subsystem 00:10:38.311 treq: not required 00:10:38.311 portid: 0 00:10:38.311 trsvcid: 4420 00:10:38.311 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:38.311 traddr: 10.0.0.2 00:10:38.311 eflags: explicit discovery connections, duplicate discovery information 00:10:38.311 sectype: none 00:10:38.311 =====Discovery Log Entry 1====== 00:10:38.311 trtype: tcp 00:10:38.311 adrfam: ipv4 00:10:38.311 subtype: nvme subsystem 00:10:38.311 treq: not required 00:10:38.311 portid: 0 00:10:38.311 trsvcid: 4420 00:10:38.311 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:38.311 traddr: 10.0.0.2 00:10:38.311 eflags: none 00:10:38.311 sectype: none 00:10:38.311 =====Discovery Log Entry 2====== 00:10:38.311 
trtype: tcp 00:10:38.311 adrfam: ipv4 00:10:38.311 subtype: nvme subsystem 00:10:38.311 treq: not required 00:10:38.311 portid: 0 00:10:38.311 trsvcid: 4420 00:10:38.311 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:38.311 traddr: 10.0.0.2 00:10:38.311 eflags: none 00:10:38.311 sectype: none 00:10:38.311 =====Discovery Log Entry 3====== 00:10:38.311 trtype: tcp 00:10:38.311 adrfam: ipv4 00:10:38.311 subtype: nvme subsystem 00:10:38.311 treq: not required 00:10:38.311 portid: 0 00:10:38.311 trsvcid: 4420 00:10:38.311 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:38.311 traddr: 10.0.0.2 00:10:38.311 eflags: none 00:10:38.311 sectype: none 00:10:38.311 =====Discovery Log Entry 4====== 00:10:38.311 trtype: tcp 00:10:38.311 adrfam: ipv4 00:10:38.311 subtype: nvme subsystem 00:10:38.311 treq: not required 00:10:38.311 portid: 0 00:10:38.311 trsvcid: 4420 00:10:38.311 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:38.311 traddr: 10.0.0.2 00:10:38.311 eflags: none 00:10:38.311 sectype: none 00:10:38.311 =====Discovery Log Entry 5====== 00:10:38.311 trtype: tcp 00:10:38.311 adrfam: ipv4 00:10:38.311 subtype: discovery subsystem referral 00:10:38.311 treq: not required 00:10:38.311 portid: 0 00:10:38.311 trsvcid: 4430 00:10:38.311 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:38.311 traddr: 10.0.0.2 00:10:38.311 eflags: none 00:10:38.311 sectype: none 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:38.311 Perform nvmf subsystem discovery via RPC 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.311 [ 00:10:38.311 { 00:10:38.311 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:38.311 "subtype": "Discovery", 00:10:38.311 "listen_addresses": [ 00:10:38.311 { 00:10:38.311 "trtype": "TCP", 00:10:38.311 "adrfam": "IPv4", 00:10:38.311 "traddr": "10.0.0.2", 00:10:38.311 "trsvcid": "4420" 00:10:38.311 } 00:10:38.311 ], 00:10:38.311 "allow_any_host": true, 00:10:38.311 "hosts": [] 00:10:38.311 }, 00:10:38.311 { 00:10:38.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.311 "subtype": "NVMe", 00:10:38.311 "listen_addresses": [ 00:10:38.311 { 00:10:38.311 "trtype": "TCP", 00:10:38.311 "adrfam": "IPv4", 00:10:38.311 "traddr": "10.0.0.2", 00:10:38.311 "trsvcid": "4420" 00:10:38.311 } 00:10:38.311 ], 00:10:38.311 "allow_any_host": true, 00:10:38.311 "hosts": [], 00:10:38.311 "serial_number": "SPDK00000000000001", 00:10:38.311 "model_number": "SPDK bdev Controller", 00:10:38.311 "max_namespaces": 32, 00:10:38.311 "min_cntlid": 1, 00:10:38.311 "max_cntlid": 65519, 00:10:38.311 "namespaces": [ 00:10:38.311 { 00:10:38.311 "nsid": 1, 00:10:38.311 "bdev_name": "Null1", 00:10:38.311 "name": "Null1", 00:10:38.311 "nguid": "48D36D7247554D64830D081741665784", 00:10:38.311 "uuid": "48d36d72-4755-4d64-830d-081741665784" 00:10:38.311 } 00:10:38.311 ] 00:10:38.311 }, 00:10:38.311 { 00:10:38.311 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:38.311 "subtype": "NVMe", 00:10:38.311 "listen_addresses": [ 00:10:38.311 { 00:10:38.311 "trtype": "TCP", 00:10:38.311 "adrfam": "IPv4", 00:10:38.311 "traddr": "10.0.0.2", 00:10:38.311 "trsvcid": "4420" 00:10:38.311 } 00:10:38.311 ], 00:10:38.311 "allow_any_host": true, 00:10:38.311 "hosts": [], 00:10:38.311 "serial_number": "SPDK00000000000002", 00:10:38.311 "model_number": "SPDK bdev Controller", 00:10:38.311 "max_namespaces": 32, 00:10:38.311 "min_cntlid": 1, 00:10:38.311 "max_cntlid": 65519, 00:10:38.311 "namespaces": [ 00:10:38.311 { 00:10:38.311 "nsid": 1, 00:10:38.311 "bdev_name": "Null2", 00:10:38.311 "name": "Null2", 00:10:38.311 "nguid": "6EC882298024428A83E99182C59580E8", 
00:10:38.311 "uuid": "6ec88229-8024-428a-83e9-9182c59580e8" 00:10:38.311 } 00:10:38.311 ] 00:10:38.311 }, 00:10:38.311 { 00:10:38.311 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:38.311 "subtype": "NVMe", 00:10:38.311 "listen_addresses": [ 00:10:38.311 { 00:10:38.311 "trtype": "TCP", 00:10:38.311 "adrfam": "IPv4", 00:10:38.311 "traddr": "10.0.0.2", 00:10:38.311 "trsvcid": "4420" 00:10:38.311 } 00:10:38.311 ], 00:10:38.311 "allow_any_host": true, 00:10:38.311 "hosts": [], 00:10:38.311 "serial_number": "SPDK00000000000003", 00:10:38.311 "model_number": "SPDK bdev Controller", 00:10:38.311 "max_namespaces": 32, 00:10:38.311 "min_cntlid": 1, 00:10:38.311 "max_cntlid": 65519, 00:10:38.311 "namespaces": [ 00:10:38.311 { 00:10:38.311 "nsid": 1, 00:10:38.311 "bdev_name": "Null3", 00:10:38.311 "name": "Null3", 00:10:38.311 "nguid": "128733AB9FE3493D8756FD5E8AC2C2AB", 00:10:38.311 "uuid": "128733ab-9fe3-493d-8756-fd5e8ac2c2ab" 00:10:38.311 } 00:10:38.311 ] 00:10:38.311 }, 00:10:38.311 { 00:10:38.311 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:38.311 "subtype": "NVMe", 00:10:38.311 "listen_addresses": [ 00:10:38.311 { 00:10:38.311 "trtype": "TCP", 00:10:38.311 "adrfam": "IPv4", 00:10:38.311 "traddr": "10.0.0.2", 00:10:38.311 "trsvcid": "4420" 00:10:38.311 } 00:10:38.311 ], 00:10:38.311 "allow_any_host": true, 00:10:38.311 "hosts": [], 00:10:38.311 "serial_number": "SPDK00000000000004", 00:10:38.311 "model_number": "SPDK bdev Controller", 00:10:38.311 "max_namespaces": 32, 00:10:38.311 "min_cntlid": 1, 00:10:38.311 "max_cntlid": 65519, 00:10:38.311 "namespaces": [ 00:10:38.311 { 00:10:38.311 "nsid": 1, 00:10:38.311 "bdev_name": "Null4", 00:10:38.311 "name": "Null4", 00:10:38.311 "nguid": "0D3A5CA87AB4415AA384109E902F5175", 00:10:38.311 "uuid": "0d3a5ca8-7ab4-415a-a384-109e902f5175" 00:10:38.311 } 00:10:38.311 ] 00:10:38.311 } 00:10:38.311 ] 00:10:38.311 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.312 
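The plain-text discovery log printed by `nvme discover` earlier in this trace (the six "=====Discovery Log Entry N======" records) can be turned into structured records for scripted checks. The sketch below is illustrative only: the `parse_discovery_log` helper is an assumption for this example and is not part of the SPDK test scripts; it just reflects the "key: value" entry format visible in the log above.

```python
# Sketch: parse the text output of `nvme discover` into per-entry dicts.
# The entry format ("=====Discovery Log Entry N======" followed by
# "key: value" lines) matches the trace above. parse_discovery_log is an
# illustrative helper, not part of the SPDK test suite.
import re

def parse_discovery_log(text):
    entries = []
    current = None
    for line in text.splitlines():
        line = line.strip()
        m = re.match(r"=+Discovery Log Entry (\d+)=+", line)
        if m:
            # Start a new record for each entry header.
            current = {"entry": int(m.group(1))}
            entries.append(current)
        elif current is not None and ": " in line:
            key, value = line.split(": ", 1)
            current[key] = value
    return entries

# A reduced sample in the same format as the log above.
sample = """\
=====Discovery Log Entry 0======
trtype: tcp
adrfam: ipv4
subtype: current discovery subsystem
trsvcid: 4420
subnqn: nqn.2014-08.org.nvmexpress.discovery
=====Discovery Log Entry 1======
trtype: tcp
subtype: nvme subsystem
trsvcid: 4420
subnqn: nqn.2016-06.io.spdk:cnode1
"""

records = parse_discovery_log(sample)
# Two entries; the second points at the cnode1 NVMe subsystem on port 4420.
```

A check like this mirrors what the test asserts implicitly: one current discovery subsystem, four `nqn.2016-06.io.spdk:cnodeN` subsystems, and one referral on port 4430.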
11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.570 rmmod nvme_tcp 00:10:38.570 rmmod nvme_fabrics 00:10:38.570 rmmod nvme_keyring 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3970915 ']' 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3970915 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3970915 ']' 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3970915 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3970915 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3970915' 00:10:38.570 killing process with pid 3970915 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3970915 00:10:38.570 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3970915 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.828 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.732 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.732 00:10:40.732 real 0m9.383s 00:10:40.732 user 0m5.513s 00:10:40.732 sys 0m4.901s 00:10:40.732 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.732 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.732 ************************************ 00:10:40.732 END TEST nvmf_target_discovery 00:10:40.732 ************************************ 00:10:40.732 11:05:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:40.732 11:05:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.732 11:05:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.732 11:05:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:40.992 ************************************ 00:10:40.992 START TEST nvmf_referrals 00:10:40.992 ************************************ 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:40.992 * Looking for test storage... 
00:10:40.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:40.992 11:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.992 
--rc genhtml_branch_coverage=1 00:10:40.992 --rc genhtml_function_coverage=1 00:10:40.992 --rc genhtml_legend=1 00:10:40.992 --rc geninfo_all_blocks=1 00:10:40.992 --rc geninfo_unexecuted_blocks=1 00:10:40.992 00:10:40.992 ' 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.992 --rc genhtml_branch_coverage=1 00:10:40.992 --rc genhtml_function_coverage=1 00:10:40.992 --rc genhtml_legend=1 00:10:40.992 --rc geninfo_all_blocks=1 00:10:40.992 --rc geninfo_unexecuted_blocks=1 00:10:40.992 00:10:40.992 ' 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.992 --rc genhtml_branch_coverage=1 00:10:40.992 --rc genhtml_function_coverage=1 00:10:40.992 --rc genhtml_legend=1 00:10:40.992 --rc geninfo_all_blocks=1 00:10:40.992 --rc geninfo_unexecuted_blocks=1 00:10:40.992 00:10:40.992 ' 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.992 --rc genhtml_branch_coverage=1 00:10:40.992 --rc genhtml_function_coverage=1 00:10:40.992 --rc genhtml_legend=1 00:10:40.992 --rc geninfo_all_blocks=1 00:10:40.992 --rc geninfo_unexecuted_blocks=1 00:10:40.992 00:10:40.992 ' 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.992 
11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.992 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.993 11:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.993 11:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.993 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:47.566 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:47.566 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:47.566 Found net devices under 0000:86:00.0: cvl_0_0 00:10:47.566 11:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:47.566 Found net devices under 0000:86:00.1: cvl_0_1 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.566 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:10:47.567 00:10:47.567 --- 10.0.0.2 ping statistics --- 00:10:47.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.567 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:47.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:10:47.567 00:10:47.567 --- 10.0.0.1 ping statistics --- 00:10:47.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.567 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3974626 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3974626 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3974626 ']' 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.567 [2024-11-20 11:05:14.511115] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:10:47.567 [2024-11-20 11:05:14.511164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.567 [2024-11-20 11:05:14.591019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.567 [2024-11-20 11:05:14.631640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.567 [2024-11-20 11:05:14.631681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:47.567 [2024-11-20 11:05:14.631688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:47.567 [2024-11-20 11:05:14.631694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:47.567 [2024-11-20 11:05:14.631700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:47.567 [2024-11-20 11:05:14.633297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:47.567 [2024-11-20 11:05:14.633408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:47.567 [2024-11-20 11:05:14.633493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:47.567 [2024-11-20 11:05:14.633493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.567 [2024-11-20 11:05:14.783133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.567 [2024-11-20 11:05:14.796544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.567 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:47.568 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:47.568 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.828 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:48.087 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.087 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:10:48.087 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:48.087 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:48.087 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:48.088 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:10:48.347 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:10:48.347 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:10:48.347 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:10:48.347 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:10:48.347 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:48.347 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:10:48.606 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:10:48.606 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:10:48.606 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.606 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:48.606 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:48.866 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:49.132 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:49.394 rmmod nvme_tcp
00:10:49.394 rmmod nvme_fabrics
00:10:49.394 rmmod nvme_keyring
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3974626 ']'
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3974626
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3974626 ']'
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3974626
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3974626
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3974626'
killing process with pid 3974626
11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3974626
00:10:49.394 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3974626
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:49.654 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:52.193
00:10:52.193 real 0m10.869s
00:10:52.193 user 0m12.089s
00:10:52.193 sys 0m5.267s
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:52.193 ************************************
00:10:52.193 END TEST nvmf_referrals
00:10:52.193 ************************************
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:52.193 ************************************
00:10:52.193 START TEST nvmf_connect_disconnect
00:10:52.193 ************************************
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:10:52.193 * Looking for test storage...
00:10:52.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:52.193 --rc genhtml_branch_coverage=1
00:10:52.193 --rc genhtml_function_coverage=1
00:10:52.193 --rc genhtml_legend=1
00:10:52.193 --rc geninfo_all_blocks=1
00:10:52.193 --rc geninfo_unexecuted_blocks=1
00:10:52.193
00:10:52.193 '
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:52.193 --rc genhtml_branch_coverage=1
00:10:52.193 --rc genhtml_function_coverage=1
00:10:52.193 --rc genhtml_legend=1
00:10:52.193 --rc geninfo_all_blocks=1
00:10:52.193 --rc geninfo_unexecuted_blocks=1
00:10:52.193
00:10:52.193 '
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:52.193 --rc genhtml_branch_coverage=1
00:10:52.193 --rc genhtml_function_coverage=1
00:10:52.193 --rc genhtml_legend=1
00:10:52.193 --rc geninfo_all_blocks=1
00:10:52.193 --rc geninfo_unexecuted_blocks=1
00:10:52.193
00:10:52.193 '
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:52.193 --rc genhtml_branch_coverage=1
00:10:52.193 --rc genhtml_function_coverage=1
00:10:52.193 --rc genhtml_legend=1
00:10:52.193 --rc geninfo_all_blocks=1
00:10:52.193 --rc geninfo_unexecuted_blocks=1
00:10:52.193
00:10:52.193 '
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:10:52.193 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH
00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.194 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.765 11:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:58.765 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:58.766 11:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:58.766 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:58.766 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.766 11:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:58.766 Found net devices under 0000:86:00.0: cvl_0_0 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.766 11:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:58.766 Found net devices under 0000:86:00.1: cvl_0_1 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.766 11:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:58.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:10:58.766 00:10:58.766 --- 10.0.0.2 ping statistics --- 00:10:58.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.766 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:10:58.766 00:10:58.766 --- 10.0.0.1 ping statistics --- 00:10:58.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.766 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3978699 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3978699 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3978699 ']' 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.766 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.766 [2024-11-20 11:05:25.417391] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:10:58.766 [2024-11-20 11:05:25.417439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.766 [2024-11-20 11:05:25.497913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.766 [2024-11-20 11:05:25.542127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:58.767 [2024-11-20 11:05:25.542162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.767 [2024-11-20 11:05:25.542170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.767 [2024-11-20 11:05:25.542179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.767 [2024-11-20 11:05:25.542185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.767 [2024-11-20 11:05:25.543698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.767 [2024-11-20 11:05:25.543803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.767 [2024-11-20 11:05:25.543827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.767 [2024-11-20 11:05:25.543828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:58.767 11:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.767 [2024-11-20 11:05:25.681449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.767 11:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.767 [2024-11-20 11:05:25.744256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:58.767 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:02.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.192 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:15.192 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:15.192 11:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.193 rmmod nvme_tcp 00:11:15.193 rmmod nvme_fabrics 00:11:15.193 rmmod nvme_keyring 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3978699 ']' 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3978699 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3978699 ']' 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3978699 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3978699 
00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3978699' 00:11:15.193 killing process with pid 3978699 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3978699 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3978699 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.193 11:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.193 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.729 00:11:17.729 real 0m25.479s 00:11:17.729 user 1m9.375s 00:11:17.729 sys 0m5.849s 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.729 ************************************ 00:11:17.729 END TEST nvmf_connect_disconnect 00:11:17.729 ************************************ 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.729 ************************************ 00:11:17.729 START TEST nvmf_multitarget 00:11:17.729 ************************************ 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:17.729 * Looking for test storage... 
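The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` timing above are produced by a `run_test` wrapper. A minimal sketch in the same spirit (an illustration, not SPDK's actual `run_test` from autotest_common.sh):

```shell
#!/usr/bin/env bash
# Illustrative run_test-style wrapper: banner, timed execution, banner.
run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"           # bash keyword `time` reports real/user/sys on stderr
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}
run_test_sketch demo_test true
```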
00:11:17.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.729 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:17.729 --rc 
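The `lt 1.15 2` check traced above (the lcov version gate) compares versions field by field via `cmp_versions`. The same predicate can be sketched more compactly with GNU `sort -V`; this is an equivalent illustration, not the scripts/common.sh implementation:

```shell
#!/usr/bin/env bash
# Version "less than" using sort -V: $1 < $2 iff they differ and $1 sorts first.
lt() {
    [ "$1" != "$2" ] &&
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
lt 1.15 2 && echo "1.15 < 2"
```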
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.729 --rc genhtml_branch_coverage=1 00:11:17.729 --rc genhtml_function_coverage=1 00:11:17.729 --rc genhtml_legend=1 00:11:17.729 --rc geninfo_all_blocks=1 00:11:17.729 --rc geninfo_unexecuted_blocks=1 00:11:17.729 00:11:17.729 ' 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:17.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.730 --rc genhtml_branch_coverage=1 00:11:17.730 --rc genhtml_function_coverage=1 00:11:17.730 --rc genhtml_legend=1 00:11:17.730 --rc geninfo_all_blocks=1 00:11:17.730 --rc geninfo_unexecuted_blocks=1 00:11:17.730 00:11:17.730 ' 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:17.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.730 --rc genhtml_branch_coverage=1 00:11:17.730 --rc genhtml_function_coverage=1 00:11:17.730 --rc genhtml_legend=1 00:11:17.730 --rc geninfo_all_blocks=1 00:11:17.730 --rc geninfo_unexecuted_blocks=1 00:11:17.730 00:11:17.730 ' 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:17.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.730 --rc genhtml_branch_coverage=1 00:11:17.730 --rc genhtml_function_coverage=1 00:11:17.730 --rc genhtml_legend=1 00:11:17.730 --rc geninfo_all_blocks=1 00:11:17.730 --rc geninfo_unexecuted_blocks=1 00:11:17.730 00:11:17.730 ' 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.730 11:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
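The exported PATH above has each toolchain directory (`/opt/go/...`, `/opt/protoc/...`, `/opt/golangci/...`) repeated many times because paths/export.sh prepends them on every source. A small first-occurrence dedup pass in the same vein (an illustration, not part of SPDK):

```shell
#!/usr/bin/env bash
# Keep only the first occurrence of each PATH entry, preserving order.
dedup_path() {
    local out= dir IFS=:
    for dir in $1; do                       # IFS=: splits on path separators
        case ":$out:" in
            *":$dir:"*) ;;                  # already present: skip
            *) out=${out:+$out:}$dir ;;     # first occurrence: append
        esac
    done
    printf '%s\n' "$out"
}
dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"
```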
00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.730 11:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.730 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:24.312 11:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.312 11:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:24.312 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:24.312 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.312 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.313 11:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:24.313 Found net devices under 0000:86:00.0: cvl_0_0 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.313 
11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:24.313 Found net devices under 0000:86:00.1: cvl_0_1 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.313 11:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:11:24.313 00:11:24.313 --- 10.0.0.2 ping statistics --- 00:11:24.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.313 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
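The connectivity probe above only checks that `ping` succeeds, but the `rtt min/avg/max/mdev` summary line it prints can also be parsed for the average RTT. A sketch over the log's own sample output (not an SPDK helper):

```shell
#!/usr/bin/env bash
# Extract the average RTT from a ping summary line.
# With -F'/', "rtt min/avg/max/mdev = A/B/C/D ms" puts the avg value in $5.
ping_avg_rtt() { awk -F'/' '/^rtt min\/avg\/max/ { print $5 }'; }
printf 'rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms\n' | ping_avg_rtt
```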
00:11:24.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:11:24.313 00:11:24.313 --- 10.0.0.1 ping statistics --- 00:11:24.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.313 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3985097 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 3985097 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3985097 ']' 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.313 11:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:24.313 [2024-11-20 11:05:51.030089] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:11:24.313 [2024-11-20 11:05:51.030138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.313 [2024-11-20 11:05:51.113167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.313 [2024-11-20 11:05:51.157410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.313 [2024-11-20 11:05:51.157442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:24.313 [2024-11-20 11:05:51.157449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.313 [2024-11-20 11:05:51.157456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.313 [2024-11-20 11:05:51.157461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.313 [2024-11-20 11:05:51.158981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.313 [2024-11-20 11:05:51.159008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.313 [2024-11-20 11:05:51.159045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.313 [2024-11-20 11:05:51.159046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.313 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.313 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:24.313 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.313 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.313 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:24.313 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.313 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:24.313 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:24.314 11:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:24.314 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:24.314 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:24.314 "nvmf_tgt_1" 00:11:24.314 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:24.314 "nvmf_tgt_2" 00:11:24.314 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:24.314 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:24.314 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:24.314 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:24.573 true 00:11:24.573 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:24.573 true 00:11:24.573 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:24.573 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:24.573 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:24.573 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:24.573 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:24.573 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:24.573 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:24.573 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.573 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:24.573 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.573 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:24.832 rmmod nvme_tcp 00:11:24.832 rmmod nvme_fabrics 00:11:24.832 rmmod nvme_keyring 00:11:24.832 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.832 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:24.832 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3985097 ']' 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3985097 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3985097 ']' 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3985097 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3985097 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3985097' 00:11:24.833 killing process with pid 3985097 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3985097 00:11:24.833 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3985097 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.092 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.999 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.999 00:11:26.999 real 0m9.673s 00:11:26.999 user 0m7.235s 00:11:26.999 sys 0m4.946s 00:11:26.999 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.999 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 ************************************ 00:11:26.999 END TEST nvmf_multitarget 00:11:27.000 ************************************ 00:11:27.000 11:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:27.000 11:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.000 11:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.000 11:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.000 ************************************ 00:11:27.000 START TEST nvmf_rpc 00:11:27.000 ************************************ 00:11:27.000 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:27.260 * Looking for test storage... 
00:11:27.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.260 11:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:27.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.260 --rc genhtml_branch_coverage=1 00:11:27.260 --rc genhtml_function_coverage=1 00:11:27.260 --rc genhtml_legend=1 00:11:27.260 --rc geninfo_all_blocks=1 00:11:27.260 --rc geninfo_unexecuted_blocks=1 
00:11:27.260 00:11:27.260 ' 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:27.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.260 --rc genhtml_branch_coverage=1 00:11:27.260 --rc genhtml_function_coverage=1 00:11:27.260 --rc genhtml_legend=1 00:11:27.260 --rc geninfo_all_blocks=1 00:11:27.260 --rc geninfo_unexecuted_blocks=1 00:11:27.260 00:11:27.260 ' 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:27.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.260 --rc genhtml_branch_coverage=1 00:11:27.260 --rc genhtml_function_coverage=1 00:11:27.260 --rc genhtml_legend=1 00:11:27.260 --rc geninfo_all_blocks=1 00:11:27.260 --rc geninfo_unexecuted_blocks=1 00:11:27.260 00:11:27.260 ' 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:27.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.260 --rc genhtml_branch_coverage=1 00:11:27.260 --rc genhtml_function_coverage=1 00:11:27.260 --rc genhtml_legend=1 00:11:27.260 --rc geninfo_all_blocks=1 00:11:27.260 --rc geninfo_unexecuted_blocks=1 00:11:27.260 00:11:27.260 ' 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.260 11:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.260 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:27.261 11:05:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.261 11:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.836 
11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:33.836 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:33.836 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:33.836 Found net devices under 0000:86:00.0: cvl_0_0 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:33.836 Found net devices under 0000:86:00.1: cvl_0_1 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.836 11:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.836 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.837 
11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:11:33.837 00:11:33.837 --- 10.0.0.2 ping statistics --- 00:11:33.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.837 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:11:33.837 00:11:33.837 --- 10.0.0.1 ping statistics --- 00:11:33.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.837 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3988932 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.837 
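
[Editor's note] The `nvmf_tcp_init` sequence traced above isolates the target-side port in a network namespace so initiator/target traffic crosses the physical link rather than loopback. Condensed from the log (interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are specific to this run; all commands require root on a machine with these NICs):

```sh
# Target port moves into its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port (4420) on the initiator interface, then verify
# reachability in both directions, exactly as the log's ping output shows.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The `nvmf_tgt` application is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`), which is why `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP`.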
11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3988932 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3988932 ']' 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.837 11:06:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.837 [2024-11-20 11:06:00.739117] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:11:33.837 [2024-11-20 11:06:00.739159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.837 [2024-11-20 11:06:00.819290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.837 [2024-11-20 11:06:00.863870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.837 [2024-11-20 11:06:00.863907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.837 [2024-11-20 11:06:00.863915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.837 [2024-11-20 11:06:00.863922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:33.837 [2024-11-20 11:06:00.863927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.837 [2024-11-20 11:06:00.865538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.837 [2024-11-20 11:06:00.865571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.837 [2024-11-20 11:06:00.865675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.837 [2024-11-20 11:06:00.865676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.095 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.095 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:34.095 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:34.095 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:34.095 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.354 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.354 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:34.354 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.354 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.354 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.354 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:34.354 "tick_rate": 2300000000, 00:11:34.354 "poll_groups": [ 00:11:34.354 { 00:11:34.354 "name": "nvmf_tgt_poll_group_000", 00:11:34.354 "admin_qpairs": 0, 00:11:34.354 "io_qpairs": 0, 00:11:34.354 
"current_admin_qpairs": 0, 00:11:34.354 "current_io_qpairs": 0, 00:11:34.354 "pending_bdev_io": 0, 00:11:34.354 "completed_nvme_io": 0, 00:11:34.354 "transports": [] 00:11:34.354 }, 00:11:34.354 { 00:11:34.354 "name": "nvmf_tgt_poll_group_001", 00:11:34.355 "admin_qpairs": 0, 00:11:34.355 "io_qpairs": 0, 00:11:34.355 "current_admin_qpairs": 0, 00:11:34.355 "current_io_qpairs": 0, 00:11:34.355 "pending_bdev_io": 0, 00:11:34.355 "completed_nvme_io": 0, 00:11:34.355 "transports": [] 00:11:34.355 }, 00:11:34.355 { 00:11:34.355 "name": "nvmf_tgt_poll_group_002", 00:11:34.355 "admin_qpairs": 0, 00:11:34.355 "io_qpairs": 0, 00:11:34.355 "current_admin_qpairs": 0, 00:11:34.355 "current_io_qpairs": 0, 00:11:34.355 "pending_bdev_io": 0, 00:11:34.355 "completed_nvme_io": 0, 00:11:34.355 "transports": [] 00:11:34.355 }, 00:11:34.355 { 00:11:34.355 "name": "nvmf_tgt_poll_group_003", 00:11:34.355 "admin_qpairs": 0, 00:11:34.355 "io_qpairs": 0, 00:11:34.355 "current_admin_qpairs": 0, 00:11:34.355 "current_io_qpairs": 0, 00:11:34.355 "pending_bdev_io": 0, 00:11:34.355 "completed_nvme_io": 0, 00:11:34.355 "transports": [] 00:11:34.355 } 00:11:34.355 ] 00:11:34.355 }' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.355 [2024-11-20 11:06:01.737752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:34.355 "tick_rate": 2300000000, 00:11:34.355 "poll_groups": [ 00:11:34.355 { 00:11:34.355 "name": "nvmf_tgt_poll_group_000", 00:11:34.355 "admin_qpairs": 0, 00:11:34.355 "io_qpairs": 0, 00:11:34.355 "current_admin_qpairs": 0, 00:11:34.355 "current_io_qpairs": 0, 00:11:34.355 "pending_bdev_io": 0, 00:11:34.355 "completed_nvme_io": 0, 00:11:34.355 "transports": [ 00:11:34.355 { 00:11:34.355 "trtype": "TCP" 00:11:34.355 } 00:11:34.355 ] 00:11:34.355 }, 00:11:34.355 { 00:11:34.355 "name": "nvmf_tgt_poll_group_001", 00:11:34.355 "admin_qpairs": 0, 00:11:34.355 "io_qpairs": 0, 00:11:34.355 "current_admin_qpairs": 0, 00:11:34.355 "current_io_qpairs": 0, 00:11:34.355 "pending_bdev_io": 0, 00:11:34.355 "completed_nvme_io": 0, 00:11:34.355 "transports": [ 00:11:34.355 { 00:11:34.355 "trtype": "TCP" 00:11:34.355 } 00:11:34.355 ] 00:11:34.355 }, 00:11:34.355 { 00:11:34.355 "name": "nvmf_tgt_poll_group_002", 00:11:34.355 "admin_qpairs": 0, 00:11:34.355 "io_qpairs": 0, 00:11:34.355 
"current_admin_qpairs": 0, 00:11:34.355 "current_io_qpairs": 0, 00:11:34.355 "pending_bdev_io": 0, 00:11:34.355 "completed_nvme_io": 0, 00:11:34.355 "transports": [ 00:11:34.355 { 00:11:34.355 "trtype": "TCP" 00:11:34.355 } 00:11:34.355 ] 00:11:34.355 }, 00:11:34.355 { 00:11:34.355 "name": "nvmf_tgt_poll_group_003", 00:11:34.355 "admin_qpairs": 0, 00:11:34.355 "io_qpairs": 0, 00:11:34.355 "current_admin_qpairs": 0, 00:11:34.355 "current_io_qpairs": 0, 00:11:34.355 "pending_bdev_io": 0, 00:11:34.355 "completed_nvme_io": 0, 00:11:34.355 "transports": [ 00:11:34.355 { 00:11:34.355 "trtype": "TCP" 00:11:34.355 } 00:11:34.355 ] 00:11:34.355 } 00:11:34.355 ] 00:11:34.355 }' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:34.355 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
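
[Editor's note] The `jcount` and `jsum` helpers exercised above are thin wrappers: `jcount` pipes a `jq` filter into `wc -l` to count matches, and `jsum` pipes it into `awk '{s+=$1}END{print s}'` to total them. Their effect on the `nvmf_get_stats` payload can be reproduced without a live target by simulating the `jq` stage with `printf` (four poll groups, all qpair counters zero, matching the JSON in the log):

```sh
# jsum stage: one number per line in, their sum out (all zeros here -> 0).
printf '0\n0\n0\n0\n' | awk '{s+=$1} END {print s}'

# jcount stage: one matched value per line in, the line count out (-> 4,
# matching the (( 4 == 4 )) check on .poll_groups[].name above).
printf '"nvmf_tgt_poll_group_000"\n"nvmf_tgt_poll_group_001"\n"nvmf_tgt_poll_group_002"\n"nvmf_tgt_poll_group_003"\n' | wc -l
```

This is why the test passes with `(( 0 == 0 ))` for both `admin_qpairs` and `io_qpairs` before any initiator has connected.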
MALLOC_BDEV_SIZE=64 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.614 Malloc1 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.614 [2024-11-20 11:06:01.926502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.614 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.615 
11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:34.615 [2024-11-20 11:06:01.961147] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:34.615 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:34.615 could not add new controller: failed to write to nvme-fabrics device 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.615 11:06:01 
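
[Editor's note] The connect failure above is the expected deny-by-default behaviour being tested: after `nvmf_subsystem_allow_any_host -d`, the subsystem rejects any host NQN not explicitly registered ("does not allow host ..."), and `NOT` asserts the nvme-cli call fails. The two remedies the script then exercises correspond to these RPCs, sketched here with SPDK's standard `scripts/rpc.py` client (the helper `rpc_cmd` in the log wraps the same calls; the host NQN is the one from this run):

```sh
# Register one specific host NQN with the subsystem (rpc.sh@61)...
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
# ...or re-open the subsystem to any host (-e enable / -d disable, rpc.sh@72):
scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
```

Either change makes the subsequent `nvme connect ... -a 10.0.0.2 -s 4420` succeed, which is what the `waitforserial SPDKISFASTANDAWESOME` loops below verify via `lsblk -l -o NAME,SERIAL`.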
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.615 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.990 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:35.990 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:35.990 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.990 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:35.990 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.893 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:37.893 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.894 [2024-11-20 11:06:05.276862] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:37.894 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:37.894 could not add new controller: failed to write to nvme-fabrics device 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:37.894 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.894 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.271 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:39.271 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:39.271 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.271 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:39.271 11:06:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:41.174 [2024-11-20 11:06:08.592516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.174 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:42.552 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:42.552 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:42.552 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:42.552 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:42.552 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:44.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:44.621 [2024-11-20 11:06:11.887680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.621 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:45.556 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:45.556 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:45.556 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:45.556 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:45.556 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:48.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.089 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:48.089 [2024-11-20 11:06:15.209531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:48.090 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.090 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:48.090 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.090 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:48.090 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.090 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:48.090 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.090 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:48.090 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.090 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:49.026 11:06:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:49.026 11:06:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:49.026 11:06:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:49.026 11:06:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:49.026 11:06:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:50.929 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:50.929 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:50.929 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:51.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:51.188 [2024-11-20 11:06:18.648469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.188 11:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:52.565 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:52.565 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:52.565 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:52.565 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:52.565 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:54.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:54.470 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.729 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.729 [2024-11-20 11:06:22.006634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.729 11:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:55.666 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:55.666 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:55.666 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:55.666 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:55.666 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:58.196 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:58.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 [2024-11-20 11:06:25.338691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 [2024-11-20 11:06:25.386732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.197 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.198 [2024-11-20 11:06:25.434867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.198 [2024-11-20 11:06:25.483032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.198 [2024-11-20 
11:06:25.531197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.198 
11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.198 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:58.198 "tick_rate": 2300000000, 00:11:58.198 "poll_groups": [ 00:11:58.198 { 00:11:58.198 "name": "nvmf_tgt_poll_group_000", 00:11:58.198 "admin_qpairs": 2, 00:11:58.198 "io_qpairs": 168, 00:11:58.198 "current_admin_qpairs": 0, 00:11:58.198 "current_io_qpairs": 0, 00:11:58.198 "pending_bdev_io": 0, 00:11:58.198 "completed_nvme_io": 267, 00:11:58.198 "transports": [ 00:11:58.198 { 00:11:58.198 "trtype": "TCP" 00:11:58.198 } 00:11:58.198 ] 00:11:58.198 }, 00:11:58.198 { 00:11:58.198 "name": "nvmf_tgt_poll_group_001", 00:11:58.198 "admin_qpairs": 2, 00:11:58.198 "io_qpairs": 168, 00:11:58.198 "current_admin_qpairs": 0, 00:11:58.198 "current_io_qpairs": 0, 00:11:58.198 "pending_bdev_io": 0, 00:11:58.198 "completed_nvme_io": 218, 00:11:58.198 "transports": [ 00:11:58.198 { 00:11:58.198 "trtype": "TCP" 00:11:58.198 } 00:11:58.198 ] 00:11:58.198 }, 00:11:58.198 { 00:11:58.198 "name": "nvmf_tgt_poll_group_002", 00:11:58.198 "admin_qpairs": 1, 00:11:58.198 "io_qpairs": 168, 00:11:58.198 "current_admin_qpairs": 0, 00:11:58.198 "current_io_qpairs": 0, 00:11:58.199 "pending_bdev_io": 0, 00:11:58.199 "completed_nvme_io": 318, 00:11:58.199 "transports": [ 00:11:58.199 { 00:11:58.199 "trtype": "TCP" 00:11:58.199 } 00:11:58.199 ] 00:11:58.199 }, 00:11:58.199 { 00:11:58.199 "name": "nvmf_tgt_poll_group_003", 00:11:58.199 "admin_qpairs": 2, 00:11:58.199 "io_qpairs": 168, 
00:11:58.199 "current_admin_qpairs": 0, 00:11:58.199 "current_io_qpairs": 0, 00:11:58.199 "pending_bdev_io": 0, 00:11:58.199 "completed_nvme_io": 219, 00:11:58.199 "transports": [ 00:11:58.199 { 00:11:58.199 "trtype": "TCP" 00:11:58.199 } 00:11:58.199 ] 00:11:58.199 } 00:11:58.199 ] 00:11:58.199 }' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.199 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.199 rmmod nvme_tcp 00:11:58.199 rmmod nvme_fabrics 00:11:58.457 rmmod nvme_keyring 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3988932 ']' 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3988932 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3988932 ']' 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3988932 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3988932 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3988932' 00:11:58.457 killing process with pid 3988932 00:11:58.457 11:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3988932 00:11:58.457 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3988932 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.715 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.616 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.616 00:12:00.616 real 0m33.569s 00:12:00.616 user 1m41.943s 00:12:00.616 sys 0m6.530s 00:12:00.616 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.616 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.616 ************************************ 00:12:00.616 END TEST 
nvmf_rpc 00:12:00.616 ************************************ 00:12:00.616 11:06:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:00.616 11:06:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.616 11:06:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.616 11:06:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.877 ************************************ 00:12:00.877 START TEST nvmf_invalid 00:12:00.877 ************************************ 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:00.877 * Looking for test storage... 00:12:00.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:00.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.877 --rc genhtml_branch_coverage=1 00:12:00.877 --rc genhtml_function_coverage=1 00:12:00.877 --rc genhtml_legend=1 00:12:00.877 --rc geninfo_all_blocks=1 00:12:00.877 --rc geninfo_unexecuted_blocks=1 00:12:00.877 00:12:00.877 ' 
00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:00.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.877 --rc genhtml_branch_coverage=1 00:12:00.877 --rc genhtml_function_coverage=1 00:12:00.877 --rc genhtml_legend=1 00:12:00.877 --rc geninfo_all_blocks=1 00:12:00.877 --rc geninfo_unexecuted_blocks=1 00:12:00.877 00:12:00.877 ' 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:00.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.877 --rc genhtml_branch_coverage=1 00:12:00.877 --rc genhtml_function_coverage=1 00:12:00.877 --rc genhtml_legend=1 00:12:00.877 --rc geninfo_all_blocks=1 00:12:00.877 --rc geninfo_unexecuted_blocks=1 00:12:00.877 00:12:00.877 ' 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:00.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.877 --rc genhtml_branch_coverage=1 00:12:00.877 --rc genhtml_function_coverage=1 00:12:00.877 --rc genhtml_legend=1 00:12:00.877 --rc geninfo_all_blocks=1 00:12:00.877 --rc geninfo_unexecuted_blocks=1 00:12:00.877 00:12:00.877 ' 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.877 11:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.877 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.878 
11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.878 11:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:00.878 11:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.878 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.446 11:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.446 11:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.446 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:07.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:07.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:07.447 Found net devices under 0000:86:00.0: cvl_0_0 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:07.447 Found net devices under 0000:86:00.1: cvl_0_1 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.447 11:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.447 11:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:12:07.447 00:12:07.447 --- 10.0.0.2 ping statistics --- 00:12:07.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.447 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:12:07.447 00:12:07.447 --- 10.0.0.1 ping statistics --- 00:12:07.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.447 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.447 11:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3997210 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3997210 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3997210 ']' 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
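The `waitforlisten 3997210` call traced above blocks until the freshly started `nvmf_tgt` accepts connections on the UNIX domain socket `/var/tmp/spdk.sock`. A minimal Python sketch of that polling pattern (illustrative only; SPDK's actual helper lives in `autotest_common.sh` and is shell, not Python):

```python
import socket
import time

def waitforlisten(sock_path, timeout=5.0, interval=0.1):
    """Poll until some process accepts connections on a UNIX socket.

    Sketch of the wait-for-RPC-socket pattern seen in the trace;
    not SPDK's real waitforlisten implementation.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True          # something is listening
        except OSError:
            time.sleep(interval)  # not up yet; retry
        finally:
            s.close()
    return False
```

The real helper additionally checks that the PID it was given is still alive while waiting, so a crashed target fails fast instead of burning the whole timeout.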
00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.447 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.447 [2024-11-20 11:06:34.378069] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:12:07.447 [2024-11-20 11:06:34.378121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.447 [2024-11-20 11:06:34.458159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.447 [2024-11-20 11:06:34.500944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.447 [2024-11-20 11:06:34.500984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.447 [2024-11-20 11:06:34.500991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.447 [2024-11-20 11:06:34.500998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.448 [2024-11-20 11:06:34.501003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
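`nvmf_tgt` is launched above with `-m 0xF`, which DPDK's EAL receives as `-c 0xF`: a hex bitmask where each set bit selects one logical core. A small sketch of how such a mask decodes (my own illustration, not SPDK/DPDK code):

```python
def cores_from_mask(mask: int) -> list[int]:
    # Bit i of the mask selects logical core i, so
    # 0xF (binary 1111) selects cores 0 through 3.
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]
```

`cores_from_mask(0xF)` yields `[0, 1, 2, 3]`, which is why the trace then reports four "Reactor started on core N" notices for cores 0-3.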
00:12:07.448 [2024-11-20 11:06:34.502577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.448 [2024-11-20 11:06:34.502614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.448 [2024-11-20 11:06:34.502722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.448 [2024-11-20 11:06:34.502723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30122 00:12:07.448 [2024-11-20 11:06:34.808873] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:07.448 { 00:12:07.448 "nqn": "nqn.2016-06.io.spdk:cnode30122", 00:12:07.448 "tgt_name": "foobar", 00:12:07.448 "method": "nvmf_create_subsystem", 00:12:07.448 "req_id": 1 00:12:07.448 } 00:12:07.448 Got JSON-RPC error 
response 00:12:07.448 response: 00:12:07.448 { 00:12:07.448 "code": -32603, 00:12:07.448 "message": "Unable to find target foobar" 00:12:07.448 }' 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:07.448 { 00:12:07.448 "nqn": "nqn.2016-06.io.spdk:cnode30122", 00:12:07.448 "tgt_name": "foobar", 00:12:07.448 "method": "nvmf_create_subsystem", 00:12:07.448 "req_id": 1 00:12:07.448 } 00:12:07.448 Got JSON-RPC error response 00:12:07.448 response: 00:12:07.448 { 00:12:07.448 "code": -32603, 00:12:07.448 "message": "Unable to find target foobar" 00:12:07.448 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:07.448 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12849 00:12:07.706 [2024-11-20 11:06:35.013578] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12849: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:07.706 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:07.706 { 00:12:07.706 "nqn": "nqn.2016-06.io.spdk:cnode12849", 00:12:07.706 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:07.706 "method": "nvmf_create_subsystem", 00:12:07.706 "req_id": 1 00:12:07.706 } 00:12:07.706 Got JSON-RPC error response 00:12:07.706 response: 00:12:07.706 { 00:12:07.706 "code": -32602, 00:12:07.706 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:07.706 }' 00:12:07.706 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:07.706 { 00:12:07.706 "nqn": "nqn.2016-06.io.spdk:cnode12849", 00:12:07.706 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:07.706 "method": "nvmf_create_subsystem", 
00:12:07.706 "req_id": 1 00:12:07.706 } 00:12:07.706 Got JSON-RPC error response 00:12:07.706 response: 00:12:07.706 { 00:12:07.706 "code": -32602, 00:12:07.706 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:07.706 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:07.706 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:07.706 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22674 00:12:07.965 [2024-11-20 11:06:35.210210] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22674: invalid model number 'SPDK_Controller' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:07.965 { 00:12:07.965 "nqn": "nqn.2016-06.io.spdk:cnode22674", 00:12:07.965 "model_number": "SPDK_Controller\u001f", 00:12:07.965 "method": "nvmf_create_subsystem", 00:12:07.965 "req_id": 1 00:12:07.965 } 00:12:07.965 Got JSON-RPC error response 00:12:07.965 response: 00:12:07.965 { 00:12:07.965 "code": -32602, 00:12:07.965 "message": "Invalid MN SPDK_Controller\u001f" 00:12:07.965 }' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:07.965 { 00:12:07.965 "nqn": "nqn.2016-06.io.spdk:cnode22674", 00:12:07.965 "model_number": "SPDK_Controller\u001f", 00:12:07.965 "method": "nvmf_create_subsystem", 00:12:07.965 "req_id": 1 00:12:07.965 } 00:12:07.965 Got JSON-RPC error response 00:12:07.965 response: 00:12:07.965 { 00:12:07.965 "code": -32602, 00:12:07.965 "message": "Invalid MN SPDK_Controller\u001f" 00:12:07.965 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 
00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:07.965 
11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:07.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.966 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.966 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:07.966 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ & == \- ]] 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '&!ci0}kxJ-k}{y*I)SeB~' 00:12:07.966 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '&!ci0}kxJ-k}{y*I)SeB~' nqn.2016-06.io.spdk:cnode21302 00:12:08.224 [2024-11-20 11:06:35.559424] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21302: invalid serial number '&!ci0}kxJ-k}{y*I)SeB~' 00:12:08.224 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:08.224 { 00:12:08.224 "nqn": "nqn.2016-06.io.spdk:cnode21302", 00:12:08.224 "serial_number": "&!ci0}kxJ-k}{y*I)SeB~", 00:12:08.224 "method": "nvmf_create_subsystem", 00:12:08.224 "req_id": 1 00:12:08.224 } 00:12:08.224 Got JSON-RPC error response 00:12:08.224 response: 00:12:08.224 { 00:12:08.224 "code": -32602, 00:12:08.224 "message": "Invalid SN &!ci0}kxJ-k}{y*I)SeB~" 00:12:08.224 }' 00:12:08.224 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:08.224 { 00:12:08.224 "nqn": "nqn.2016-06.io.spdk:cnode21302", 00:12:08.224 "serial_number": "&!ci0}kxJ-k}{y*I)SeB~", 00:12:08.224 "method": "nvmf_create_subsystem", 00:12:08.224 "req_id": 1 00:12:08.224 } 00:12:08.225 Got JSON-RPC error response 00:12:08.225 response: 00:12:08.225 { 00:12:08.225 "code": -32602, 00:12:08.225 "message": "Invalid SN &!ci0}kxJ-k}{y*I)SeB~" 00:12:08.225 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:08.225 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:08.225 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:08.225 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:08.225 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:08.484 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:08.484 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:08.484 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.484 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:08.485 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ';O]FGP32$wW_Jo-aQx;P'\'']vV5l[%`IkHO<-Ed,i)(' 00:12:08.485 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ';O]FGP32$wW_Jo-aQx;P'\'']vV5l[%`IkHO<-Ed,i)(' nqn.2016-06.io.spdk:cnode10773 00:12:08.743 [2024-11-20 11:06:36.028980] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10773: invalid model number ';O]FGP32$wW_Jo-aQx;P']vV5l[%`IkHO<-Ed,i)(' 00:12:08.743 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:08.743 { 00:12:08.743 "nqn": "nqn.2016-06.io.spdk:cnode10773", 00:12:08.743 "model_number": ";O]FGP32$wW_Jo-aQx;P'\'']vV5l[%`IkHO<-Ed,i)(", 00:12:08.743 "method": "nvmf_create_subsystem", 00:12:08.743 "req_id": 1 00:12:08.743 } 00:12:08.743 Got JSON-RPC error response 00:12:08.743 response: 00:12:08.743 { 00:12:08.743 "code": -32602, 00:12:08.743 "message": "Invalid MN ;O]FGP32$wW_Jo-aQx;P'\'']vV5l[%`IkHO<-Ed,i)(" 00:12:08.743 }' 00:12:08.743 11:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:08.743 { 00:12:08.743 "nqn": "nqn.2016-06.io.spdk:cnode10773", 00:12:08.743 "model_number": ";O]FGP32$wW_Jo-aQx;P']vV5l[%`IkHO<-Ed,i)(", 00:12:08.743 "method": "nvmf_create_subsystem", 00:12:08.743 "req_id": 1 00:12:08.743 } 00:12:08.743 Got JSON-RPC error response 00:12:08.743 response: 00:12:08.743 { 00:12:08.743 "code": -32602, 00:12:08.743 "message": "Invalid MN ;O]FGP32$wW_Jo-aQx;P']vV5l[%`IkHO<-Ed,i)(" 00:12:08.743 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:08.743 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:08.743 [2024-11-20 11:06:36.233741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.001 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:09.001 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:09.001 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:09.001 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:09.001 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:09.001 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:09.258 [2024-11-20 11:06:36.647127] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:09.258 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:09.258 { 00:12:09.258 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:12:09.258 "listen_address": { 00:12:09.258 "trtype": "tcp", 00:12:09.258 "traddr": "", 00:12:09.258 "trsvcid": "4421" 00:12:09.258 }, 00:12:09.258 "method": "nvmf_subsystem_remove_listener", 00:12:09.258 "req_id": 1 00:12:09.258 } 00:12:09.258 Got JSON-RPC error response 00:12:09.258 response: 00:12:09.258 { 00:12:09.258 "code": -32602, 00:12:09.258 "message": "Invalid parameters" 00:12:09.258 }' 00:12:09.258 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:09.258 { 00:12:09.258 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:09.258 "listen_address": { 00:12:09.258 "trtype": "tcp", 00:12:09.258 "traddr": "", 00:12:09.258 "trsvcid": "4421" 00:12:09.258 }, 00:12:09.258 "method": "nvmf_subsystem_remove_listener", 00:12:09.258 "req_id": 1 00:12:09.258 } 00:12:09.258 Got JSON-RPC error response 00:12:09.258 response: 00:12:09.258 { 00:12:09.258 "code": -32602, 00:12:09.258 "message": "Invalid parameters" 00:12:09.258 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:09.258 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31898 -i 0 00:12:09.516 [2024-11-20 11:06:36.855797] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31898: invalid cntlid range [0-65519] 00:12:09.516 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:09.516 { 00:12:09.516 "nqn": "nqn.2016-06.io.spdk:cnode31898", 00:12:09.516 "min_cntlid": 0, 00:12:09.516 "method": "nvmf_create_subsystem", 00:12:09.516 "req_id": 1 00:12:09.516 } 00:12:09.516 Got JSON-RPC error response 00:12:09.516 response: 00:12:09.516 { 00:12:09.516 "code": -32602, 00:12:09.516 "message": "Invalid cntlid range [0-65519]" 00:12:09.516 }' 00:12:09.516 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:09.516 
{ 00:12:09.516 "nqn": "nqn.2016-06.io.spdk:cnode31898", 00:12:09.516 "min_cntlid": 0, 00:12:09.516 "method": "nvmf_create_subsystem", 00:12:09.516 "req_id": 1 00:12:09.516 } 00:12:09.516 Got JSON-RPC error response 00:12:09.516 response: 00:12:09.516 { 00:12:09.516 "code": -32602, 00:12:09.516 "message": "Invalid cntlid range [0-65519]" 00:12:09.516 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.516 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26006 -i 65520 00:12:09.774 [2024-11-20 11:06:37.076508] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26006: invalid cntlid range [65520-65519] 00:12:09.774 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:09.774 { 00:12:09.774 "nqn": "nqn.2016-06.io.spdk:cnode26006", 00:12:09.774 "min_cntlid": 65520, 00:12:09.774 "method": "nvmf_create_subsystem", 00:12:09.774 "req_id": 1 00:12:09.774 } 00:12:09.774 Got JSON-RPC error response 00:12:09.774 response: 00:12:09.774 { 00:12:09.774 "code": -32602, 00:12:09.774 "message": "Invalid cntlid range [65520-65519]" 00:12:09.774 }' 00:12:09.774 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:09.774 { 00:12:09.774 "nqn": "nqn.2016-06.io.spdk:cnode26006", 00:12:09.774 "min_cntlid": 65520, 00:12:09.774 "method": "nvmf_create_subsystem", 00:12:09.774 "req_id": 1 00:12:09.774 } 00:12:09.774 Got JSON-RPC error response 00:12:09.774 response: 00:12:09.774 { 00:12:09.774 "code": -32602, 00:12:09.774 "message": "Invalid cntlid range [65520-65519]" 00:12:09.774 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.774 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode10798 -I 0 00:12:10.032 [2024-11-20 11:06:37.277187] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10798: invalid cntlid range [1-0] 00:12:10.032 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:10.032 { 00:12:10.032 "nqn": "nqn.2016-06.io.spdk:cnode10798", 00:12:10.032 "max_cntlid": 0, 00:12:10.032 "method": "nvmf_create_subsystem", 00:12:10.032 "req_id": 1 00:12:10.032 } 00:12:10.032 Got JSON-RPC error response 00:12:10.032 response: 00:12:10.032 { 00:12:10.032 "code": -32602, 00:12:10.032 "message": "Invalid cntlid range [1-0]" 00:12:10.032 }' 00:12:10.032 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:10.032 { 00:12:10.032 "nqn": "nqn.2016-06.io.spdk:cnode10798", 00:12:10.032 "max_cntlid": 0, 00:12:10.032 "method": "nvmf_create_subsystem", 00:12:10.032 "req_id": 1 00:12:10.032 } 00:12:10.032 Got JSON-RPC error response 00:12:10.032 response: 00:12:10.032 { 00:12:10.032 "code": -32602, 00:12:10.032 "message": "Invalid cntlid range [1-0]" 00:12:10.032 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.032 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8977 -I 65520 00:12:10.032 [2024-11-20 11:06:37.477882] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8977: invalid cntlid range [1-65520] 00:12:10.032 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:10.032 { 00:12:10.032 "nqn": "nqn.2016-06.io.spdk:cnode8977", 00:12:10.032 "max_cntlid": 65520, 00:12:10.032 "method": "nvmf_create_subsystem", 00:12:10.032 "req_id": 1 00:12:10.032 } 00:12:10.032 Got JSON-RPC error response 00:12:10.032 response: 00:12:10.032 { 00:12:10.032 "code": -32602, 00:12:10.032 "message": 
"Invalid cntlid range [1-65520]" 00:12:10.032 }' 00:12:10.032 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:10.032 { 00:12:10.032 "nqn": "nqn.2016-06.io.spdk:cnode8977", 00:12:10.032 "max_cntlid": 65520, 00:12:10.032 "method": "nvmf_create_subsystem", 00:12:10.032 "req_id": 1 00:12:10.032 } 00:12:10.032 Got JSON-RPC error response 00:12:10.032 response: 00:12:10.032 { 00:12:10.032 "code": -32602, 00:12:10.032 "message": "Invalid cntlid range [1-65520]" 00:12:10.032 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.032 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12164 -i 6 -I 5 00:12:10.290 [2024-11-20 11:06:37.678623] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12164: invalid cntlid range [6-5] 00:12:10.290 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:10.290 { 00:12:10.290 "nqn": "nqn.2016-06.io.spdk:cnode12164", 00:12:10.290 "min_cntlid": 6, 00:12:10.290 "max_cntlid": 5, 00:12:10.290 "method": "nvmf_create_subsystem", 00:12:10.290 "req_id": 1 00:12:10.290 } 00:12:10.290 Got JSON-RPC error response 00:12:10.290 response: 00:12:10.290 { 00:12:10.290 "code": -32602, 00:12:10.290 "message": "Invalid cntlid range [6-5]" 00:12:10.290 }' 00:12:10.290 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:10.290 { 00:12:10.290 "nqn": "nqn.2016-06.io.spdk:cnode12164", 00:12:10.290 "min_cntlid": 6, 00:12:10.290 "max_cntlid": 5, 00:12:10.290 "method": "nvmf_create_subsystem", 00:12:10.290 "req_id": 1 00:12:10.290 } 00:12:10.290 Got JSON-RPC error response 00:12:10.290 response: 00:12:10.290 { 00:12:10.290 "code": -32602, 00:12:10.290 "message": "Invalid cntlid range [6-5]" 00:12:10.290 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
00:12:10.290 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:10.548 { 00:12:10.548 "name": "foobar", 00:12:10.548 "method": "nvmf_delete_target", 00:12:10.548 "req_id": 1 00:12:10.548 } 00:12:10.548 Got JSON-RPC error response 00:12:10.548 response: 00:12:10.548 { 00:12:10.548 "code": -32602, 00:12:10.548 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:10.548 }' 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:10.548 { 00:12:10.548 "name": "foobar", 00:12:10.548 "method": "nvmf_delete_target", 00:12:10.548 "req_id": 1 00:12:10.548 } 00:12:10.548 Got JSON-RPC error response 00:12:10.548 response: 00:12:10.548 { 00:12:10.548 "code": -32602, 00:12:10.548 "message": "The specified target doesn't exist, cannot delete it." 
00:12:10.548 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.548 rmmod nvme_tcp 00:12:10.548 rmmod nvme_fabrics 00:12:10.548 rmmod nvme_keyring 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3997210 ']' 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3997210 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3997210 ']' 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3997210 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3997210 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3997210' 00:12:10.548 killing process with pid 3997210 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3997210 00:12:10.548 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3997210 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.808 11:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.808 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.712 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.712 00:12:12.712 real 0m12.051s 00:12:12.712 user 0m18.510s 00:12:12.712 sys 0m5.540s 00:12:12.712 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.712 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:12.712 ************************************ 00:12:12.712 END TEST nvmf_invalid 00:12:12.712 ************************************ 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.971 ************************************ 00:12:12.971 START TEST nvmf_connect_stress 00:12:12.971 ************************************ 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:12.971 * Looking for test storage... 
00:12:12.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:12.971 11:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:12.971 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.972 11:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:12.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.972 --rc genhtml_branch_coverage=1 00:12:12.972 --rc genhtml_function_coverage=1 00:12:12.972 --rc genhtml_legend=1 00:12:12.972 --rc geninfo_all_blocks=1 00:12:12.972 --rc geninfo_unexecuted_blocks=1 00:12:12.972 00:12:12.972 ' 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:12.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.972 --rc genhtml_branch_coverage=1 00:12:12.972 --rc genhtml_function_coverage=1 00:12:12.972 --rc genhtml_legend=1 00:12:12.972 --rc geninfo_all_blocks=1 00:12:12.972 --rc geninfo_unexecuted_blocks=1 00:12:12.972 00:12:12.972 ' 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:12.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.972 --rc genhtml_branch_coverage=1 00:12:12.972 --rc genhtml_function_coverage=1 00:12:12.972 --rc genhtml_legend=1 00:12:12.972 --rc geninfo_all_blocks=1 00:12:12.972 --rc geninfo_unexecuted_blocks=1 00:12:12.972 00:12:12.972 ' 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:12.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.972 --rc genhtml_branch_coverage=1 00:12:12.972 --rc genhtml_function_coverage=1 00:12:12.972 --rc genhtml_legend=1 00:12:12.972 --rc geninfo_all_blocks=1 00:12:12.972 --rc geninfo_unexecuted_blocks=1 00:12:12.972 00:12:12.972 ' 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.972 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:13.232 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:13.232 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.804 11:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:19.804 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.804 11:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:19.804 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.804 11:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:19.804 Found net devices under 0000:86:00.0: cvl_0_0 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.804 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:19.805 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:19.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:19.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:12:19.805 00:12:19.805 --- 10.0.0.2 ping statistics --- 00:12:19.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.805 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:12:19.805 00:12:19.805 --- 10.0.0.1 ping statistics --- 00:12:19.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.805 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:19.805 11:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=4001407 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 4001407 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 4001407 ']' 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.805 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.805 [2024-11-20 11:06:46.530642] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:12:19.805 [2024-11-20 11:06:46.530690] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.805 [2024-11-20 11:06:46.612299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:19.805 [2024-11-20 11:06:46.655219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.805 [2024-11-20 11:06:46.655255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.805 [2024-11-20 11:06:46.655262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.805 [2024-11-20 11:06:46.655269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.805 [2024-11-20 11:06:46.655274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:19.805 [2024-11-20 11:06:46.656640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.806 [2024-11-20 11:06:46.656747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.806 [2024-11-20 11:06:46.656748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.806 [2024-11-20 11:06:46.793728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.806 [2024-11-20 11:06:46.813952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.806 NULL1 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4001525 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.806 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.806 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.806 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:19.806 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.806 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.806 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.372 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.372 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:20.372 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.372 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.372 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.630 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.630 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:20.630 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.630 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.630 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.888 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.888 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:20.888 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.888 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.888 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.146 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.146 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:21.146 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.146 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.146 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.404 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.404 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:21.404 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.404 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.404 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.970 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.970 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:21.970 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.970 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.970 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.228 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.228 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:22.228 11:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.228 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.228 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.487 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.487 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:22.487 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.487 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.487 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.746 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.746 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:22.746 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.746 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.746 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.004 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.004 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:23.004 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.004 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.004 
11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.571 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:23.571 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.571 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.571 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.829 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.829 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:23.829 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.829 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.829 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.087 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.087 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:24.087 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.087 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.087 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.345 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.345 
11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:24.345 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.345 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.345 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.911 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.911 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:24.911 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.911 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.911 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.169 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.169 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:25.169 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.169 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.169 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.427 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.427 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:25.427 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:25.427 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.427 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.685 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.685 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:25.685 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.685 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.685 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.943 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.943 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:25.943 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.943 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.943 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.509 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.509 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:26.509 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.509 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.509 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:26.767 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.767 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:26.767 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.767 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.767 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.024 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.024 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:27.024 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.024 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.024 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.282 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.282 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:27.282 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.282 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.282 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.848 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.848 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 4001525 00:12:27.848 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.848 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.848 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.106 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.106 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:28.106 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.106 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.106 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.363 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.363 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:28.363 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.363 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.363 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.622 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.622 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:28.622 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.622 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:28.622 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.880 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.880 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:28.880 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.880 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.880 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.447 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.447 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:29.447 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.447 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.447 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.705 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001525 00:12:29.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4001525) - No such process 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4001525 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.705 rmmod nvme_tcp 00:12:29.705 rmmod nvme_fabrics 00:12:29.705 rmmod nvme_keyring 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 4001407 ']' 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 4001407 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 4001407 ']' 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 4001407 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4001407 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4001407' 00:12:29.705 killing process with pid 4001407 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 4001407 00:12:29.705 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 4001407 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.965 11:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.965 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.884 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:31.884 00:12:31.884 real 0m19.119s 00:12:31.884 user 0m39.549s 00:12:31.884 sys 0m8.529s 00:12:31.884 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.884 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.884 ************************************ 00:12:31.884 END TEST nvmf_connect_stress 00:12:31.884 ************************************ 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.146 ************************************ 00:12:32.146 START TEST nvmf_fused_ordering 00:12:32.146 ************************************ 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:32.146 * Looking for test storage... 
00:12:32.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:32.146 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.146 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:32.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.146 --rc genhtml_branch_coverage=1 00:12:32.146 --rc genhtml_function_coverage=1 00:12:32.146 --rc genhtml_legend=1 00:12:32.146 --rc geninfo_all_blocks=1 00:12:32.146 --rc geninfo_unexecuted_blocks=1 00:12:32.146 00:12:32.146 ' 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:32.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.146 --rc genhtml_branch_coverage=1 00:12:32.146 --rc genhtml_function_coverage=1 00:12:32.146 --rc genhtml_legend=1 00:12:32.146 --rc geninfo_all_blocks=1 00:12:32.146 --rc geninfo_unexecuted_blocks=1 00:12:32.146 00:12:32.146 ' 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:32.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.146 --rc genhtml_branch_coverage=1 00:12:32.146 --rc genhtml_function_coverage=1 00:12:32.146 --rc genhtml_legend=1 00:12:32.146 --rc geninfo_all_blocks=1 00:12:32.146 --rc geninfo_unexecuted_blocks=1 00:12:32.146 00:12:32.146 ' 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:32.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.146 --rc genhtml_branch_coverage=1 00:12:32.146 --rc genhtml_function_coverage=1 00:12:32.146 --rc genhtml_legend=1 00:12:32.146 --rc geninfo_all_blocks=1 00:12:32.146 --rc geninfo_unexecuted_blocks=1 00:12:32.146 00:12:32.146 ' 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.146 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.406 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:32.406 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:32.406 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.406 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.406 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.406 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.406 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.406 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.407 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.046 11:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:39.046 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.046 11:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:39.046 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.046 11:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:39.046 Found net devices under 0000:86:00.0: cvl_0_0 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.046 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:39.047 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:39.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:39.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms
00:12:39.047
00:12:39.047 --- 10.0.0.2 ping statistics ---
00:12:39.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:39.047 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:39.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:39.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:12:39.047
00:12:39.047 --- 10.0.0.1 ping statistics ---
00:12:39.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:39.047 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:12:39.047 11:07:05
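For reference, the nvmf_tcp_init steps traced above amount to the short standalone sketch below. The interface names (cvl_0_0/cvl_0_1), addresses (10.0.0.0/24), namespace name, and port 4420 are taken from this run; `ipts` in the trace is the suite's iptables wrapper, replaced here with plain iptables. This is a minimal sketch, not the actual nvmf/common.sh, and it requires root plus two free NIC ports:

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as traced above: the target-side port is moved into
# its own network namespace so initiator and target traffic go over the wire.
set -e

NS=cvl_0_0_ns_spdk   # namespace that will hold the target interface

# Clear stale addresses, then move the target port into the namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

# Initiator stays in the default namespace on 10.0.0.1;
# the target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port, then verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Once both pings succeed, the harness prefixes every target command with `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD array above), which is why nvmf_tgt later runs inside the namespace while the test binary connects from outside.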
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=4006821
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 4006821
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 4006821 ']'
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:39.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:39.047 [2024-11-20 11:07:05.730147] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization...
00:12:39.047 [2024-11-20 11:07:05.730196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:39.047 [2024-11-20 11:07:05.809134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:39.047 [2024-11-20 11:07:05.850554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:39.047 [2024-11-20 11:07:05.850590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:39.047 [2024-11-20 11:07:05.850598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:39.047 [2024-11-20 11:07:05.850604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:39.047 [2024-11-20 11:07:05.850609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:39.047 [2024-11-20 11:07:05.851204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:39.047 [2024-11-20 11:07:05.986580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.047 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:39.047 [2024-11-20 11:07:06.006766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:39.047 NULL1
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.047 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:39.048 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.048 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:12:39.048 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering --
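The rpc_cmd sequence traced above (fused_ordering.sh@15-20) can be replayed by hand with SPDK's rpc.py once nvmf_tgt is listening on /var/tmp/spdk.sock. The SPDK_DIR path below is the one from this run and the transport flags are copied verbatim from the trace (not re-derived); treat this as an illustrative sketch rather than the actual test script:

```shell
#!/usr/bin/env bash
# Sketch of the target configuration performed by fused_ordering.sh in this run.
set -e
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }   # talks to /var/tmp/spdk.sock by default

# Create the TCP transport (flags -o -u 8192 copied from NVMF_TRANSPORT_OPTS above)
rpc nvmf_create_transport -t tcp -o -u 8192

# Subsystem: allow any host (-a), serial number (-s), at most 10 namespaces (-m)
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Backing store: a 1000 MiB null bdev with 512-byte blocks ("size: 1GB" in the
# attach output), exported as namespace 1 of the subsystem
rpc bdev_null_create NULL1 1000 512
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

A null bdev discards writes and returns zeroes on reads, which is why it suits this test: the fused_ordering binary only exercises command ordering on the queue pair, not data integrity, so no real media is needed.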
common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.048 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:39.048 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.048 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:39.048 [2024-11-20 11:07:06.067105] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:12:39.048 [2024-11-20 11:07:06.067150] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4006844 ] 00:12:39.307 Attached to nqn.2016-06.io.spdk:cnode1 00:12:39.307 Namespace ID: 1 size: 1GB 00:12:39.307 fused_ordering(0) 00:12:39.307 fused_ordering(1) 00:12:39.307 fused_ordering(2) 00:12:39.307 fused_ordering(3) 00:12:39.307 fused_ordering(4) 00:12:39.307 fused_ordering(5) 00:12:39.307 fused_ordering(6) 00:12:39.307 fused_ordering(7) 00:12:39.307 fused_ordering(8) 00:12:39.307 fused_ordering(9) 00:12:39.307 fused_ordering(10) 00:12:39.307 fused_ordering(11) 00:12:39.307 fused_ordering(12) 00:12:39.307 fused_ordering(13) 00:12:39.307 fused_ordering(14) 00:12:39.307 fused_ordering(15) 00:12:39.307 fused_ordering(16) 00:12:39.307 fused_ordering(17) 00:12:39.307 fused_ordering(18) 00:12:39.307 fused_ordering(19) 00:12:39.307 fused_ordering(20) 00:12:39.307 fused_ordering(21) 00:12:39.307 fused_ordering(22) 00:12:39.307 fused_ordering(23) 00:12:39.307 fused_ordering(24) 00:12:39.307 fused_ordering(25) 00:12:39.307 fused_ordering(26) 00:12:39.307 fused_ordering(27) 00:12:39.307 
fused_ordering(28) 00:12:39.307 fused_ordering(29) 00:12:39.307 fused_ordering(30) 00:12:39.307 fused_ordering(31) 00:12:39.307 fused_ordering(32) 00:12:39.307 fused_ordering(33) 00:12:39.307 fused_ordering(34) 00:12:39.307 fused_ordering(35) 00:12:39.307 fused_ordering(36) 00:12:39.307 fused_ordering(37) 00:12:39.307 fused_ordering(38) 00:12:39.307 fused_ordering(39) 00:12:39.307 fused_ordering(40) 00:12:39.307 fused_ordering(41) 00:12:39.307 fused_ordering(42) 00:12:39.307 fused_ordering(43) 00:12:39.307 fused_ordering(44) 00:12:39.307 fused_ordering(45) 00:12:39.307 fused_ordering(46) 00:12:39.307 fused_ordering(47) 00:12:39.307 fused_ordering(48) 00:12:39.307 fused_ordering(49) 00:12:39.307 fused_ordering(50) 00:12:39.307 fused_ordering(51) 00:12:39.307 fused_ordering(52) 00:12:39.307 fused_ordering(53) 00:12:39.307 fused_ordering(54) 00:12:39.307 fused_ordering(55) 00:12:39.307 fused_ordering(56) 00:12:39.307 fused_ordering(57) 00:12:39.308 fused_ordering(58) 00:12:39.308 fused_ordering(59) 00:12:39.308 fused_ordering(60) 00:12:39.308 fused_ordering(61) 00:12:39.308 fused_ordering(62) 00:12:39.308 fused_ordering(63) 00:12:39.308 fused_ordering(64) 00:12:39.308 fused_ordering(65) 00:12:39.308 fused_ordering(66) 00:12:39.308 fused_ordering(67) 00:12:39.308 fused_ordering(68) 00:12:39.308 fused_ordering(69) 00:12:39.308 fused_ordering(70) 00:12:39.308 fused_ordering(71) 00:12:39.308 fused_ordering(72) 00:12:39.308 fused_ordering(73) 00:12:39.308 fused_ordering(74) 00:12:39.308 fused_ordering(75) 00:12:39.308 fused_ordering(76) 00:12:39.308 fused_ordering(77) 00:12:39.308 fused_ordering(78) 00:12:39.308 fused_ordering(79) 00:12:39.308 fused_ordering(80) 00:12:39.308 fused_ordering(81) 00:12:39.308 fused_ordering(82) 00:12:39.308 fused_ordering(83) 00:12:39.308 fused_ordering(84) 00:12:39.308 fused_ordering(85) 00:12:39.308 fused_ordering(86) 00:12:39.308 fused_ordering(87) 00:12:39.308 fused_ordering(88) 00:12:39.308 fused_ordering(89) 00:12:39.308 
fused_ordering(90) 00:12:39.308 fused_ordering(91) 00:12:39.308 fused_ordering(92) 00:12:39.308 fused_ordering(93) 00:12:39.308 fused_ordering(94) 00:12:39.308 fused_ordering(95) 00:12:39.308 fused_ordering(96) 00:12:39.308 fused_ordering(97) 00:12:39.308 fused_ordering(98) 00:12:39.308 fused_ordering(99) 00:12:39.308 fused_ordering(100) 00:12:39.308 fused_ordering(101) 00:12:39.308 fused_ordering(102) 00:12:39.308 fused_ordering(103) 00:12:39.308 fused_ordering(104) 00:12:39.308 fused_ordering(105) 00:12:39.308 fused_ordering(106) 00:12:39.308 fused_ordering(107) 00:12:39.308 fused_ordering(108) 00:12:39.308 fused_ordering(109) 00:12:39.308 fused_ordering(110) 00:12:39.308 fused_ordering(111) 00:12:39.308 fused_ordering(112) 00:12:39.308 fused_ordering(113) 00:12:39.308 fused_ordering(114) 00:12:39.308 fused_ordering(115) 00:12:39.308 fused_ordering(116) 00:12:39.308 fused_ordering(117) 00:12:39.308 fused_ordering(118) 00:12:39.308 fused_ordering(119) 00:12:39.308 fused_ordering(120) 00:12:39.308 fused_ordering(121) 00:12:39.308 fused_ordering(122) 00:12:39.308 fused_ordering(123) 00:12:39.308 fused_ordering(124) 00:12:39.308 fused_ordering(125) 00:12:39.308 fused_ordering(126) 00:12:39.308 fused_ordering(127) 00:12:39.308 fused_ordering(128) 00:12:39.308 fused_ordering(129) 00:12:39.308 fused_ordering(130) 00:12:39.308 fused_ordering(131) 00:12:39.308 fused_ordering(132) 00:12:39.308 fused_ordering(133) 00:12:39.308 fused_ordering(134) 00:12:39.308 fused_ordering(135) 00:12:39.308 fused_ordering(136) 00:12:39.308 fused_ordering(137) 00:12:39.308 fused_ordering(138) 00:12:39.308 fused_ordering(139) 00:12:39.308 fused_ordering(140) 00:12:39.308 fused_ordering(141) 00:12:39.308 fused_ordering(142) 00:12:39.308 fused_ordering(143) 00:12:39.308 fused_ordering(144) 00:12:39.308 fused_ordering(145) 00:12:39.308 fused_ordering(146) 00:12:39.308 fused_ordering(147) 00:12:39.308 fused_ordering(148) 00:12:39.308 fused_ordering(149) 00:12:39.308 fused_ordering(150) 
00:12:39.308 fused_ordering(151) 00:12:39.308 fused_ordering(152) 00:12:39.308 fused_ordering(153) 00:12:39.308 fused_ordering(154) 00:12:39.308 fused_ordering(155) 00:12:39.308 fused_ordering(156) 00:12:39.308 fused_ordering(157) 00:12:39.308 fused_ordering(158) 00:12:39.308 fused_ordering(159) 00:12:39.308 fused_ordering(160) 00:12:39.308 fused_ordering(161) 00:12:39.308 fused_ordering(162) 00:12:39.308 fused_ordering(163) 00:12:39.308 fused_ordering(164) 00:12:39.308 fused_ordering(165) 00:12:39.308 fused_ordering(166) 00:12:39.308 fused_ordering(167) 00:12:39.308 fused_ordering(168) 00:12:39.308 fused_ordering(169) 00:12:39.308 fused_ordering(170) 00:12:39.308 fused_ordering(171) 00:12:39.308 fused_ordering(172) 00:12:39.308 fused_ordering(173) 00:12:39.308 fused_ordering(174) 00:12:39.308 fused_ordering(175) 00:12:39.308 fused_ordering(176) 00:12:39.308 fused_ordering(177) 00:12:39.308 fused_ordering(178) 00:12:39.308 fused_ordering(179) 00:12:39.308 fused_ordering(180) 00:12:39.308 fused_ordering(181) 00:12:39.308 fused_ordering(182) 00:12:39.308 fused_ordering(183) 00:12:39.308 fused_ordering(184) 00:12:39.308 fused_ordering(185) 00:12:39.308 fused_ordering(186) 00:12:39.308 fused_ordering(187) 00:12:39.308 fused_ordering(188) 00:12:39.308 fused_ordering(189) 00:12:39.308 fused_ordering(190) 00:12:39.308 fused_ordering(191) 00:12:39.308 fused_ordering(192) 00:12:39.308 fused_ordering(193) 00:12:39.308 fused_ordering(194) 00:12:39.308 fused_ordering(195) 00:12:39.308 fused_ordering(196) 00:12:39.308 fused_ordering(197) 00:12:39.308 fused_ordering(198) 00:12:39.308 fused_ordering(199) 00:12:39.308 fused_ordering(200) 00:12:39.308 fused_ordering(201) 00:12:39.308 fused_ordering(202) 00:12:39.308 fused_ordering(203) 00:12:39.308 fused_ordering(204) 00:12:39.308 fused_ordering(205) 00:12:39.568 fused_ordering(206) 00:12:39.568 fused_ordering(207) 00:12:39.568 fused_ordering(208) 00:12:39.568 fused_ordering(209) 00:12:39.568 fused_ordering(210) 00:12:39.568 
fused_ordering(211) 00:12:39.568 fused_ordering(212) 00:12:39.568 fused_ordering(213) 00:12:39.568 fused_ordering(214) 00:12:39.568 fused_ordering(215) 00:12:39.568 fused_ordering(216) 00:12:39.568 fused_ordering(217) 00:12:39.568 fused_ordering(218) 00:12:39.568 fused_ordering(219) 00:12:39.568 fused_ordering(220) 00:12:39.568 fused_ordering(221) 00:12:39.568 fused_ordering(222) 00:12:39.568 fused_ordering(223) 00:12:39.568 fused_ordering(224) 00:12:39.568 fused_ordering(225) 00:12:39.568 fused_ordering(226) 00:12:39.568 fused_ordering(227) 00:12:39.568 fused_ordering(228) 00:12:39.568 fused_ordering(229) 00:12:39.568 fused_ordering(230) 00:12:39.568 fused_ordering(231) 00:12:39.568 fused_ordering(232) 00:12:39.568 fused_ordering(233) 00:12:39.568 fused_ordering(234) 00:12:39.568 fused_ordering(235) 00:12:39.568 fused_ordering(236) 00:12:39.568 fused_ordering(237) 00:12:39.568 fused_ordering(238) 00:12:39.568 fused_ordering(239) 00:12:39.568 fused_ordering(240) 00:12:39.568 fused_ordering(241) 00:12:39.568 fused_ordering(242) 00:12:39.568 fused_ordering(243) 00:12:39.568 fused_ordering(244) 00:12:39.568 fused_ordering(245) 00:12:39.568 fused_ordering(246) 00:12:39.568 fused_ordering(247) 00:12:39.568 fused_ordering(248) 00:12:39.568 fused_ordering(249) 00:12:39.568 fused_ordering(250) 00:12:39.568 fused_ordering(251) 00:12:39.568 fused_ordering(252) 00:12:39.568 fused_ordering(253) 00:12:39.568 fused_ordering(254) 00:12:39.568 fused_ordering(255) 00:12:39.568 fused_ordering(256) 00:12:39.568 fused_ordering(257) 00:12:39.568 fused_ordering(258) 00:12:39.568 fused_ordering(259) 00:12:39.568 fused_ordering(260) 00:12:39.568 fused_ordering(261) 00:12:39.568 fused_ordering(262) 00:12:39.568 fused_ordering(263) 00:12:39.568 fused_ordering(264) 00:12:39.568 fused_ordering(265) 00:12:39.568 fused_ordering(266) 00:12:39.568 fused_ordering(267) 00:12:39.568 fused_ordering(268) 00:12:39.568 fused_ordering(269) 00:12:39.568 fused_ordering(270) 00:12:39.568 fused_ordering(271) 
00:12:39.568 fused_ordering(272) 00:12:39.568 fused_ordering(273) 00:12:39.568 fused_ordering(274) 00:12:39.568 fused_ordering(275) 00:12:39.568 fused_ordering(276) 00:12:39.568 fused_ordering(277) 00:12:39.568 fused_ordering(278) 00:12:39.568 fused_ordering(279) 00:12:39.568 fused_ordering(280) 00:12:39.568 fused_ordering(281) 00:12:39.568 fused_ordering(282) 00:12:39.568 fused_ordering(283) 00:12:39.568 fused_ordering(284) 00:12:39.568 fused_ordering(285) 00:12:39.568 fused_ordering(286) 00:12:39.568 fused_ordering(287) 00:12:39.568 fused_ordering(288) 00:12:39.568 fused_ordering(289) 00:12:39.568 fused_ordering(290) 00:12:39.568 fused_ordering(291) 00:12:39.568 fused_ordering(292) 00:12:39.568 fused_ordering(293) 00:12:39.568 fused_ordering(294) 00:12:39.568 fused_ordering(295) 00:12:39.568 fused_ordering(296) 00:12:39.568 fused_ordering(297) 00:12:39.568 fused_ordering(298) 00:12:39.568 fused_ordering(299) 00:12:39.568 fused_ordering(300) 00:12:39.568 fused_ordering(301) 00:12:39.568 fused_ordering(302) 00:12:39.568 fused_ordering(303) 00:12:39.568 fused_ordering(304) 00:12:39.568 fused_ordering(305) 00:12:39.568 fused_ordering(306) 00:12:39.568 fused_ordering(307) 00:12:39.568 fused_ordering(308) 00:12:39.568 fused_ordering(309) 00:12:39.568 fused_ordering(310) 00:12:39.568 fused_ordering(311) 00:12:39.568 fused_ordering(312) 00:12:39.568 fused_ordering(313) 00:12:39.568 fused_ordering(314) 00:12:39.568 fused_ordering(315) 00:12:39.568 fused_ordering(316) 00:12:39.568 fused_ordering(317) 00:12:39.568 fused_ordering(318) 00:12:39.568 fused_ordering(319) 00:12:39.568 fused_ordering(320) 00:12:39.568 fused_ordering(321) 00:12:39.568 fused_ordering(322) 00:12:39.568 fused_ordering(323) 00:12:39.568 fused_ordering(324) 00:12:39.568 fused_ordering(325) 00:12:39.568 fused_ordering(326) 00:12:39.568 fused_ordering(327) 00:12:39.568 fused_ordering(328) 00:12:39.568 fused_ordering(329) 00:12:39.568 fused_ordering(330) 00:12:39.568 fused_ordering(331) 00:12:39.568 
fused_ordering(332) 00:12:39.568 fused_ordering(333) 00:12:39.568 fused_ordering(334) 00:12:39.568 fused_ordering(335) 00:12:39.568 fused_ordering(336) 00:12:39.568 fused_ordering(337) 00:12:39.568 fused_ordering(338) 00:12:39.568 fused_ordering(339) 00:12:39.568 fused_ordering(340) 00:12:39.568 fused_ordering(341) 00:12:39.568 fused_ordering(342) 00:12:39.568 fused_ordering(343) 00:12:39.568 fused_ordering(344) 00:12:39.568 fused_ordering(345) 00:12:39.568 fused_ordering(346) 00:12:39.568 fused_ordering(347) 00:12:39.568 fused_ordering(348) 00:12:39.568 fused_ordering(349) 00:12:39.568 fused_ordering(350) 00:12:39.568 fused_ordering(351) 00:12:39.568 fused_ordering(352) 00:12:39.568 fused_ordering(353) 00:12:39.568 fused_ordering(354) 00:12:39.568 fused_ordering(355) 00:12:39.568 fused_ordering(356) 00:12:39.568 fused_ordering(357) 00:12:39.568 fused_ordering(358) 00:12:39.568 fused_ordering(359) 00:12:39.568 fused_ordering(360) 00:12:39.568 fused_ordering(361) 00:12:39.569 fused_ordering(362) 00:12:39.569 fused_ordering(363) 00:12:39.569 fused_ordering(364) 00:12:39.569 fused_ordering(365) 00:12:39.569 fused_ordering(366) 00:12:39.569 fused_ordering(367) 00:12:39.569 fused_ordering(368) 00:12:39.569 fused_ordering(369) 00:12:39.569 fused_ordering(370) 00:12:39.569 fused_ordering(371) 00:12:39.569 fused_ordering(372) 00:12:39.569 fused_ordering(373) 00:12:39.569 fused_ordering(374) 00:12:39.569 fused_ordering(375) 00:12:39.569 fused_ordering(376) 00:12:39.569 fused_ordering(377) 00:12:39.569 fused_ordering(378) 00:12:39.569 fused_ordering(379) 00:12:39.569 fused_ordering(380) 00:12:39.569 fused_ordering(381) 00:12:39.569 fused_ordering(382) 00:12:39.569 fused_ordering(383) 00:12:39.569 fused_ordering(384) 00:12:39.569 fused_ordering(385) 00:12:39.569 fused_ordering(386) 00:12:39.569 fused_ordering(387) 00:12:39.569 fused_ordering(388) 00:12:39.569 fused_ordering(389) 00:12:39.569 fused_ordering(390) 00:12:39.569 fused_ordering(391) 00:12:39.569 fused_ordering(392) 
00:12:39.569 fused_ordering(393) 00:12:39.569 fused_ordering(394) 00:12:39.569 fused_ordering(395) 00:12:39.569 fused_ordering(396) 00:12:39.569 fused_ordering(397) 00:12:39.569 fused_ordering(398) 00:12:39.569 fused_ordering(399) 00:12:39.569 fused_ordering(400) 00:12:39.569 fused_ordering(401) 00:12:39.569 fused_ordering(402) 00:12:39.569 fused_ordering(403) 00:12:39.569 fused_ordering(404) 00:12:39.569 fused_ordering(405) 00:12:39.569 fused_ordering(406) 00:12:39.569 fused_ordering(407) 00:12:39.569 fused_ordering(408) 00:12:39.569 fused_ordering(409) 00:12:39.569 fused_ordering(410) 00:12:39.827 fused_ordering(411) 00:12:39.827 fused_ordering(412) 00:12:39.827 fused_ordering(413) 00:12:39.827 fused_ordering(414) 00:12:39.827 fused_ordering(415) 00:12:39.827 fused_ordering(416) 00:12:39.827 fused_ordering(417) 00:12:39.827 fused_ordering(418) 00:12:39.827 fused_ordering(419) 00:12:39.827 fused_ordering(420) 00:12:39.827 fused_ordering(421) 00:12:39.827 fused_ordering(422) 00:12:39.827 fused_ordering(423) 00:12:39.827 fused_ordering(424) 00:12:39.827 fused_ordering(425) 00:12:39.827 fused_ordering(426) 00:12:39.828 fused_ordering(427) 00:12:39.828 fused_ordering(428) 00:12:39.828 fused_ordering(429) 00:12:39.828 fused_ordering(430) 00:12:39.828 fused_ordering(431) 00:12:39.828 fused_ordering(432) 00:12:39.828 fused_ordering(433) 00:12:39.828 fused_ordering(434) 00:12:39.828 fused_ordering(435) 00:12:39.828 fused_ordering(436) 00:12:39.828 fused_ordering(437) 00:12:39.828 fused_ordering(438) 00:12:39.828 fused_ordering(439) 00:12:39.828 fused_ordering(440) 00:12:39.828 fused_ordering(441) 00:12:39.828 fused_ordering(442) 00:12:39.828 fused_ordering(443) 00:12:39.828 fused_ordering(444) 00:12:39.828 fused_ordering(445) 00:12:39.828 fused_ordering(446) 00:12:39.828 fused_ordering(447) 00:12:39.828 fused_ordering(448) 00:12:39.828 fused_ordering(449) 00:12:39.828 fused_ordering(450) 00:12:39.828 fused_ordering(451) 00:12:39.828 fused_ordering(452) 00:12:39.828 
fused_ordering(453) 00:12:39.828 fused_ordering(454) 00:12:39.828 fused_ordering(455) 00:12:39.828 fused_ordering(456) 00:12:39.828 fused_ordering(457) 00:12:39.828 fused_ordering(458) 00:12:39.828 fused_ordering(459) 00:12:39.828 fused_ordering(460) 00:12:39.828 fused_ordering(461) 00:12:39.828 fused_ordering(462) 00:12:39.828 fused_ordering(463) 00:12:39.828 fused_ordering(464) 00:12:39.828 fused_ordering(465) 00:12:39.828 fused_ordering(466) 00:12:39.828 fused_ordering(467) 00:12:39.828 fused_ordering(468) 00:12:39.828 fused_ordering(469) 00:12:39.828 fused_ordering(470) 00:12:39.828 fused_ordering(471) 00:12:39.828 fused_ordering(472) 00:12:39.828 fused_ordering(473) 00:12:39.828 fused_ordering(474) 00:12:39.828 fused_ordering(475) 00:12:39.828 fused_ordering(476) 00:12:39.828 fused_ordering(477) 00:12:39.828 fused_ordering(478) 00:12:39.828 fused_ordering(479) 00:12:39.828 fused_ordering(480) 00:12:39.828 fused_ordering(481) 00:12:39.828 fused_ordering(482) 00:12:39.828 fused_ordering(483) 00:12:39.828 fused_ordering(484) 00:12:39.828 fused_ordering(485) 00:12:39.828 fused_ordering(486) 00:12:39.828 fused_ordering(487) 00:12:39.828 fused_ordering(488) 00:12:39.828 fused_ordering(489) 00:12:39.828 fused_ordering(490) 00:12:39.828 fused_ordering(491) 00:12:39.828 fused_ordering(492) 00:12:39.828 fused_ordering(493) 00:12:39.828 fused_ordering(494) 00:12:39.828 fused_ordering(495) 00:12:39.828 fused_ordering(496) 00:12:39.828 fused_ordering(497) 00:12:39.828 fused_ordering(498) 00:12:39.828 fused_ordering(499) 00:12:39.828 fused_ordering(500) 00:12:39.828 fused_ordering(501) 00:12:39.828 fused_ordering(502) 00:12:39.828 fused_ordering(503) 00:12:39.828 fused_ordering(504) 00:12:39.828 fused_ordering(505) 00:12:39.828 fused_ordering(506) 00:12:39.828 fused_ordering(507) 00:12:39.828 fused_ordering(508) 00:12:39.828 fused_ordering(509) 00:12:39.828 fused_ordering(510) 00:12:39.828 fused_ordering(511) 00:12:39.828 fused_ordering(512) 00:12:39.828 fused_ordering(513) 
00:12:39.828 fused_ordering(514) 00:12:39.828 fused_ordering(515) 00:12:39.828 fused_ordering(516) 00:12:39.828 fused_ordering(517) 00:12:39.828 fused_ordering(518) 00:12:39.828 fused_ordering(519) 00:12:39.828 fused_ordering(520) 00:12:39.828 fused_ordering(521) 00:12:39.828 fused_ordering(522) 00:12:39.828 fused_ordering(523) 00:12:39.828 fused_ordering(524) 00:12:39.828 fused_ordering(525) 00:12:39.828 fused_ordering(526) 00:12:39.828 fused_ordering(527) 00:12:39.828 fused_ordering(528) 00:12:39.828 fused_ordering(529) 00:12:39.828 fused_ordering(530) 00:12:39.828 fused_ordering(531) 00:12:39.828 fused_ordering(532) 00:12:39.828 fused_ordering(533) 00:12:39.828 fused_ordering(534) 00:12:39.828 fused_ordering(535) 00:12:39.828 fused_ordering(536) 00:12:39.828 fused_ordering(537) 00:12:39.828 fused_ordering(538) 00:12:39.828 fused_ordering(539) 00:12:39.828 fused_ordering(540) 00:12:39.828 fused_ordering(541) 00:12:39.828 fused_ordering(542) 00:12:39.828 fused_ordering(543) 00:12:39.828 fused_ordering(544) 00:12:39.828 fused_ordering(545) 00:12:39.828 fused_ordering(546) 00:12:39.828 fused_ordering(547) 00:12:39.828 fused_ordering(548) 00:12:39.828 fused_ordering(549) 00:12:39.828 fused_ordering(550) 00:12:39.828 fused_ordering(551) 00:12:39.828 fused_ordering(552) 00:12:39.828 fused_ordering(553) 00:12:39.828 fused_ordering(554) 00:12:39.828 fused_ordering(555) 00:12:39.828 fused_ordering(556) 00:12:39.828 fused_ordering(557) 00:12:39.828 fused_ordering(558) 00:12:39.828 fused_ordering(559) 00:12:39.828 fused_ordering(560) 00:12:39.828 fused_ordering(561) 00:12:39.828 fused_ordering(562) 00:12:39.828 fused_ordering(563) 00:12:39.828 fused_ordering(564) 00:12:39.828 fused_ordering(565) 00:12:39.828 fused_ordering(566) 00:12:39.828 fused_ordering(567) 00:12:39.828 fused_ordering(568) 00:12:39.828 fused_ordering(569) 00:12:39.828 fused_ordering(570) 00:12:39.828 fused_ordering(571) 00:12:39.828 fused_ordering(572) 00:12:39.828 fused_ordering(573) 00:12:39.828 
fused_ordering(574) 00:12:39.828 ... fused_ordering(1023) 00:12:40.657 [repeated per-iteration counter output, iterations 574–1023, elided; spans 00:12:39.828–00:12:40.657] 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.657 rmmod nvme_tcp 00:12:40.657 rmmod nvme_fabrics 00:12:40.657 rmmod nvme_keyring 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 4006821 ']' 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 4006821 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 4006821 ']' 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 4006821 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4006821 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4006821' 00:12:40.657 killing process with pid 4006821 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 4006821 00:12:40.657 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 4006821 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.917 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.455 00:12:43.455 real 0m10.907s 00:12:43.455 user 0m5.292s 00:12:43.455 sys 0m5.919s 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.455 ************************************ 00:12:43.455 END TEST nvmf_fused_ordering 00:12:43.455 ************************************ 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:43.455 11:07:10 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.455 ************************************ 00:12:43.455 START TEST nvmf_ns_masking 00:12:43.455 ************************************ 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:43.455 * Looking for test storage... 00:12:43.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.455 11:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:43.455 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:43.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.456 --rc genhtml_branch_coverage=1 00:12:43.456 --rc genhtml_function_coverage=1 00:12:43.456 --rc genhtml_legend=1 00:12:43.456 --rc geninfo_all_blocks=1 00:12:43.456 --rc geninfo_unexecuted_blocks=1 00:12:43.456 00:12:43.456 ' 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:43.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.456 --rc genhtml_branch_coverage=1 00:12:43.456 --rc genhtml_function_coverage=1 00:12:43.456 --rc genhtml_legend=1 00:12:43.456 --rc geninfo_all_blocks=1 00:12:43.456 --rc geninfo_unexecuted_blocks=1 00:12:43.456 00:12:43.456 ' 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:43.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.456 --rc genhtml_branch_coverage=1 00:12:43.456 --rc genhtml_function_coverage=1 00:12:43.456 --rc genhtml_legend=1 00:12:43.456 --rc geninfo_all_blocks=1 00:12:43.456 --rc geninfo_unexecuted_blocks=1 00:12:43.456 00:12:43.456 ' 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:43.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.456 --rc genhtml_branch_coverage=1 00:12:43.456 --rc 
genhtml_function_coverage=1 00:12:43.456 --rc genhtml_legend=1 00:12:43.456 --rc geninfo_all_blocks=1 00:12:43.456 --rc geninfo_unexecuted_blocks=1 00:12:43.456 00:12:43.456 ' 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=81904244-e626-4b9a-b613-7030d6a9b69f 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4338823b-8436-4dde-8e9d-1d2d298b9d0b 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=abe83b2b-7a1d-4315-9dcf-1a56696a6df7 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.456 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.457 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.457 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:12:43.457 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.457 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.457 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.457 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.457 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.457 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.457 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.457 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.053 11:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.053 11:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:50.053 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:50.053 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:12:50.053 Found net devices under 0000:86:00.0: cvl_0_0 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:50.053 Found net devices under 0000:86:00.1: cvl_0_1 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.053 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:50.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:12:50.054 00:12:50.054 --- 10.0.0.2 ping statistics --- 00:12:50.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.054 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:12:50.054 00:12:50.054 --- 10.0.0.1 ping statistics --- 00:12:50.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.054 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=4010786 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 4010786 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 4010786 ']' 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:50.054 [2024-11-20 11:07:16.669198] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:12:50.054 [2024-11-20 11:07:16.669246] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.054 [2024-11-20 11:07:16.750572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.054 [2024-11-20 11:07:16.791923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.054 [2024-11-20 11:07:16.791963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:50.054 [2024-11-20 11:07:16.791970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.054 [2024-11-20 11:07:16.791977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.054 [2024-11-20 11:07:16.791982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.054 [2024-11-20 11:07:16.792525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.054 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:50.054 [2024-11-20 11:07:17.092422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.054 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:50.054 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:50.054 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:12:50.054 Malloc1 00:12:50.054 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:50.054 Malloc2 00:12:50.313 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:50.313 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:50.572 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.831 [2024-11-20 11:07:18.123697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.831 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:50.831 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I abe83b2b-7a1d-4315-9dcf-1a56696a6df7 -a 10.0.0.2 -s 4420 -i 4 00:12:51.090 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.090 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:51.090 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.090 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:51.090 11:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:52.995 [ 0]:0x1 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:52.995 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:53.255 
11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9438c6ef22374b4da68bbf315bd4ca58 00:12:53.255 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9438c6ef22374b4da68bbf315bd4ca58 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:53.255 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:53.255 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:53.255 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:53.255 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:53.255 [ 0]:0x1 00:12:53.255 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:53.255 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9438c6ef22374b4da68bbf315bd4ca58 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9438c6ef22374b4da68bbf315bd4ca58 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:53.514 [ 1]:0x2 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca51e87f98bd40e1b4fa0b3a59ac262c 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca51e87f98bd40e1b4fa0b3a59ac262c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.514 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.773 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:53.773 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:53.773 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I abe83b2b-7a1d-4315-9dcf-1a56696a6df7 -a 10.0.0.2 -s 4420 -i 4 00:12:54.032 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:54.032 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:54.032 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.032 11:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:54.032 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:54.032 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:55.938 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:55.938 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:55.938 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.938 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:55.938 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.938 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:55.938 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:55.938 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:56.197 [ 0]:0x2 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca51e87f98bd40e1b4fa0b3a59ac262c 00:12:56.197 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca51e87f98bd40e1b4fa0b3a59ac262c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:56.198 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:56.456 [ 0]:0x1 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9438c6ef22374b4da68bbf315bd4ca58 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9438c6ef22374b4da68bbf315bd4ca58 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:56.456 [ 1]:0x2 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca51e87f98bd40e1b4fa0b3a59ac262c 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca51e87f98bd40e1b4fa0b3a59ac262c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:56.456 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:56.715 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:56.716 [ 0]:0x2 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca51e87f98bd40e1b4fa0b3a59ac262c 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca51e87f98bd40e1b4fa0b3a59ac262c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.716 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:56.974 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:56.974 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I abe83b2b-7a1d-4315-9dcf-1a56696a6df7 -a 10.0.0.2 -s 4420 -i 4 00:12:57.232 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:57.232 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:57.232 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.232 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:57.232 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:57.233 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.147 [ 0]:0x1 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:59.147 11:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9438c6ef22374b4da68bbf315bd4ca58 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9438c6ef22374b4da68bbf315bd4ca58 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.147 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:59.406 [ 1]:0x2 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca51e87f98bd40e1b4fa0b3a59ac262c 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca51e87f98bd40e1b4fa0b3a59ac262c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:59.406 
11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.406 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:59.407 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:59.407 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:59.665 [ 0]:0x2 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca51e87f98bd40e1b4fa0b3a59ac262c 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca51e87f98bd40e1b4fa0b3a59ac262c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:59.665 11:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:59.665 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:59.665 [2024-11-20 11:07:27.157372] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:59.924 request: 00:12:59.924 { 00:12:59.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:59.924 "nsid": 2, 00:12:59.924 "host": "nqn.2016-06.io.spdk:host1", 00:12:59.924 "method": "nvmf_ns_remove_host", 00:12:59.924 "req_id": 1 00:12:59.924 } 00:12:59.924 Got JSON-RPC error response 00:12:59.924 response: 00:12:59.924 { 00:12:59.924 "code": -32602, 00:12:59.924 "message": "Invalid parameters" 00:12:59.924 } 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:59.924 11:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:59.924 [ 0]:0x2 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.924 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca51e87f98bd40e1b4fa0b3a59ac262c 00:12:59.925 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca51e87f98bd40e1b4fa0b3a59ac262c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.925 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:59.925 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.184 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4012605 00:13:00.184 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:00.184 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.184 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4012605 /var/tmp/host.sock 00:13:00.184 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 4012605 ']' 00:13:00.184 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:00.184 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.184 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:00.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:00.184 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.184 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:00.184 [2024-11-20 11:07:27.540908] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:13:00.184 [2024-11-20 11:07:27.540959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012605 ] 00:13:00.184 [2024-11-20 11:07:27.619096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.184 [2024-11-20 11:07:27.662344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.443 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.443 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:00.443 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.702 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:00.961 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 81904244-e626-4b9a-b613-7030d6a9b69f 00:13:00.961 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:00.961 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 81904244E6264B9AB6137030D6A9B69F -i 00:13:01.219 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4338823b-8436-4dde-8e9d-1d2d298b9d0b 00:13:01.219 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:01.219 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4338823B84364DDE8E9D1D2D298B9D0B -i 00:13:01.219 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:01.477 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:01.736 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:01.736 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:01.995 nvme0n1 00:13:01.995 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:01.995 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:02.562 nvme1n2 00:13:02.562 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:02.562 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:02.562 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:02.562 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:02.562 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:02.562 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:02.562 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:02.562 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:02.562 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:02.822 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 81904244-e626-4b9a-b613-7030d6a9b69f == \8\1\9\0\4\2\4\4\-\e\6\2\6\-\4\b\9\a\-\b\6\1\3\-\7\0\3\0\d\6\a\9\b\6\9\f ]] 00:13:02.822 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:02.822 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:02.822 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:03.079 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4338823b-8436-4dde-8e9d-1d2d298b9d0b == \4\3\3\8\8\2\3\b\-\8\4\3\6\-\4\d\d\e\-\8\e\9\d\-\1\d\2\d\2\9\8\b\9\d\0\b ]] 00:13:03.079 11:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 81904244-e626-4b9a-b613-7030d6a9b69f 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 81904244E6264B9AB6137030D6A9B69F 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 81904244E6264B9AB6137030D6A9B69F 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:03.337 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 81904244E6264B9AB6137030D6A9B69F 00:13:03.595 [2024-11-20 11:07:30.992019] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:03.595 [2024-11-20 11:07:30.992051] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:03.595 [2024-11-20 11:07:30.992059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.595 request: 00:13:03.595 { 00:13:03.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.595 "namespace": { 00:13:03.595 "bdev_name": "invalid", 00:13:03.595 "nsid": 1, 00:13:03.595 "nguid": "81904244E6264B9AB6137030D6A9B69F", 00:13:03.595 "no_auto_visible": false 00:13:03.595 }, 00:13:03.595 "method": "nvmf_subsystem_add_ns", 00:13:03.595 "req_id": 1 00:13:03.595 } 00:13:03.595 Got JSON-RPC error response 00:13:03.595 response: 00:13:03.595 { 00:13:03.595 "code": -32602, 00:13:03.595 "message": "Invalid parameters" 00:13:03.595 } 00:13:03.595 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:03.595 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:03.595 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:03.595 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:03.595 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 81904244-e626-4b9a-b613-7030d6a9b69f 00:13:03.595 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:03.595 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 81904244E6264B9AB6137030D6A9B69F -i 00:13:03.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:05.755 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:05.755 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:05.755 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 4012605 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 4012605 ']' 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 4012605 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4012605 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4012605' 00:13:06.013 killing process with pid 4012605 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 4012605 00:13:06.013 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 4012605 00:13:06.607 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.607 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:06.607 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:06.607 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:06.607 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:06.607 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:06.607 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:06.607 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.607 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.607 rmmod nvme_tcp 00:13:06.607 rmmod 
nvme_fabrics 00:13:06.607 rmmod nvme_keyring 00:13:06.607 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.607 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:06.607 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:06.607 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 4010786 ']' 00:13:06.607 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 4010786 00:13:06.607 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 4010786 ']' 00:13:06.607 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 4010786 00:13:06.607 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:06.607 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.607 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4010786 00:13:06.865 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.865 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.865 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4010786' 00:13:06.865 killing process with pid 4010786 00:13:06.865 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 4010786 00:13:06.865 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 4010786 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.866 
11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.866 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:09.396 00:13:09.396 real 0m25.962s 00:13:09.396 user 0m31.225s 00:13:09.396 sys 0m7.082s 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.396 ************************************ 00:13:09.396 END TEST nvmf_ns_masking 00:13:09.396 ************************************ 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.396 ************************************ 00:13:09.396 START TEST nvmf_nvme_cli 00:13:09.396 ************************************ 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:09.396 * Looking for test storage... 00:13:09.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.396 11:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:09.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.396 --rc genhtml_branch_coverage=1 00:13:09.396 --rc genhtml_function_coverage=1 00:13:09.396 --rc genhtml_legend=1 00:13:09.396 --rc geninfo_all_blocks=1 00:13:09.396 --rc geninfo_unexecuted_blocks=1 00:13:09.396 
00:13:09.396 ' 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:09.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.396 --rc genhtml_branch_coverage=1 00:13:09.396 --rc genhtml_function_coverage=1 00:13:09.396 --rc genhtml_legend=1 00:13:09.396 --rc geninfo_all_blocks=1 00:13:09.396 --rc geninfo_unexecuted_blocks=1 00:13:09.396 00:13:09.396 ' 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:09.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.396 --rc genhtml_branch_coverage=1 00:13:09.396 --rc genhtml_function_coverage=1 00:13:09.396 --rc genhtml_legend=1 00:13:09.396 --rc geninfo_all_blocks=1 00:13:09.396 --rc geninfo_unexecuted_blocks=1 00:13:09.396 00:13:09.396 ' 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:09.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.396 --rc genhtml_branch_coverage=1 00:13:09.396 --rc genhtml_function_coverage=1 00:13:09.396 --rc genhtml_legend=1 00:13:09.396 --rc geninfo_all_blocks=1 00:13:09.396 --rc geninfo_unexecuted_blocks=1 00:13:09.396 00:13:09.396 ' 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.396 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.397 11:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:09.397 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:15.966 11:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:15.966 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:15.966 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.966 11:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.966 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:15.967 Found net devices under 0000:86:00.0: cvl_0_0 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:15.967 Found net devices under 0000:86:00.1: cvl_0_1 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.967 11:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:13:15.967 00:13:15.967 --- 10.0.0.2 ping statistics --- 00:13:15.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.967 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:13:15.967 00:13:15.967 --- 10.0.0.1 ping statistics --- 00:13:15.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.967 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.967 11:07:42 
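The namespace plumbing recorded above — flush both interfaces, create a netns, move the target-side NIC into it, assign the 10.0.0.x addresses, open TCP port 4420, and verify with ping in both directions — can be condensed into a standalone sketch. Interface and namespace names (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`) are taken from the log; running this requires root and two directly connected interfaces, so treat it as an illustration of what `nvmf/common.sh` does, not a drop-in replacement.

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace setup seen in the log above.
set -euo pipefail

TGT_IF=cvl_0_0            # moves into the target namespace
INI_IF=cvl_0_1            # stays in the default (initiator) namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic; the SPDK_NVMF comment tag lets teardown
# strip exactly these rules later with `iptables-save | grep -v`.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF

# Connectivity check in both directions, as in the log
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```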
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=4017320 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 4017320 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 4017320 ']' 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.967 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.967 [2024-11-20 11:07:42.716577] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:13:15.967 [2024-11-20 11:07:42.716630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.967 [2024-11-20 11:07:42.796110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.967 [2024-11-20 11:07:42.840797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.967 [2024-11-20 11:07:42.840835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.967 [2024-11-20 11:07:42.840843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.967 [2024-11-20 11:07:42.840849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.967 [2024-11-20 11:07:42.840853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:15.967 [2024-11-20 11:07:42.842390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.967 [2024-11-20 11:07:42.842477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.967 [2024-11-20 11:07:42.842607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.967 [2024-11-20 11:07:42.842608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 [2024-11-20 11:07:43.602084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 Malloc0 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 Malloc1 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 [2024-11-20 11:07:43.709206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.226 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:16.484 00:13:16.484 Discovery Log Number of Records 2, Generation counter 2 00:13:16.484 =====Discovery Log Entry 0====== 00:13:16.484 trtype: tcp 00:13:16.484 adrfam: ipv4 00:13:16.484 subtype: current discovery subsystem 00:13:16.484 treq: not required 00:13:16.484 portid: 0 00:13:16.484 trsvcid: 4420 
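The `rpc_cmd` calls above (from `target/nvme_cli.sh` lines 19–28) configure the target in a fixed order: TCP transport, two malloc bdevs, a subsystem carrying both as namespaces, then data and discovery listeners. As a hedged sketch, assuming `scripts/rpc.py` from the SPDK tree is on PATH and `nvmf_tgt` is already running with its RPC socket at the default `/var/tmp/spdk.sock`:

```shell
RPC="scripts/rpc.py"   # assumption: run from the SPDK source root

$RPC nvmf_create_transport -t tcp -o -u 8192      # in-capsule data size 8192
$RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B blocks
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
     -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The two malloc namespaces are what later surface on the initiator as `/dev/nvme0n1` and `/dev/nvme0n2` once the host connects.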
00:13:16.484 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:16.484 traddr: 10.0.0.2 00:13:16.484 eflags: explicit discovery connections, duplicate discovery information 00:13:16.484 sectype: none 00:13:16.484 =====Discovery Log Entry 1====== 00:13:16.484 trtype: tcp 00:13:16.484 adrfam: ipv4 00:13:16.484 subtype: nvme subsystem 00:13:16.484 treq: not required 00:13:16.484 portid: 0 00:13:16.484 trsvcid: 4420 00:13:16.484 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:16.484 traddr: 10.0.0.2 00:13:16.484 eflags: none 00:13:16.484 sectype: none 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:16.484 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.855 11:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:17.855 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:17.855 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.855 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:17.855 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:17.855 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:19.754 
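The initiator-side flow above — `nvme connect`, then `waitforserial` polling `lsblk` until both namespaces appear — can be sketched as follows. The hostnqn/hostid values are the ones generated in this run; the serial `SPDKISFASTANDAWESOME` is the subsystem serial set at creation time.

```shell
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562

nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
  -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# Poll until both namespaces are visible by the subsystem serial,
# mirroring waitforserial's retry loop (bounded at ~16 tries in the log).
for _ in $(seq 16); do
  if [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; then
    break
  fi
  sleep 2
done

nvme list                                      # should show /dev/nvme0n1, /dev/nvme0n2
nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # detach before subsystem deletion
```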
11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:19.754 /dev/nvme0n2 ]] 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:19.754 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:20.012 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:20.270 rmmod nvme_tcp 00:13:20.270 rmmod nvme_fabrics 00:13:20.270 rmmod nvme_keyring 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 4017320 ']' 
00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 4017320 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 4017320 ']' 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 4017320 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4017320 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4017320' 00:13:20.270 killing process with pid 4017320 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 4017320 00:13:20.270 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 4017320 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.529 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:23.076 00:13:23.076 real 0m13.575s 00:13:23.076 user 0m22.188s 00:13:23.076 sys 0m5.239s 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.076 ************************************ 00:13:23.076 END TEST nvmf_nvme_cli 00:13:23.076 ************************************ 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.076 ************************************ 00:13:23.076 
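The `nvmftestfini` teardown visible above amounts to three steps: rewrite the firewall without the tagged rules, destroy the target namespace, and flush the initiator address. A minimal sketch, using the names from this run (requires root):

```shell
# Drop only the rules tagged SPDK_NVMF at setup time, leaving the rest intact
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Delete the target namespace; cvl_0_0 returns to the default namespace
ip netns delete cvl_0_0_ns_spdk

# Clear the initiator-side address, matching the final `ip -4 addr flush cvl_0_1`
ip -4 addr flush cvl_0_1
```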
START TEST nvmf_vfio_user 00:13:23.076 ************************************ 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:23.076 * Looking for test storage... 00:13:23.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.076 11:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:23.076 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:23.077 11:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:23.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.077 --rc genhtml_branch_coverage=1 00:13:23.077 --rc genhtml_function_coverage=1 00:13:23.077 --rc genhtml_legend=1 00:13:23.077 --rc geninfo_all_blocks=1 00:13:23.077 --rc geninfo_unexecuted_blocks=1 00:13:23.077 00:13:23.077 ' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:23.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.077 --rc genhtml_branch_coverage=1 00:13:23.077 --rc genhtml_function_coverage=1 00:13:23.077 --rc genhtml_legend=1 00:13:23.077 --rc geninfo_all_blocks=1 00:13:23.077 --rc geninfo_unexecuted_blocks=1 00:13:23.077 00:13:23.077 ' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:23.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.077 --rc genhtml_branch_coverage=1 00:13:23.077 --rc genhtml_function_coverage=1 00:13:23.077 --rc genhtml_legend=1 00:13:23.077 --rc geninfo_all_blocks=1 00:13:23.077 --rc geninfo_unexecuted_blocks=1 00:13:23.077 00:13:23.077 ' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:23.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.077 --rc genhtml_branch_coverage=1 00:13:23.077 --rc genhtml_function_coverage=1 00:13:23.077 --rc genhtml_legend=1 00:13:23.077 --rc geninfo_all_blocks=1 00:13:23.077 --rc geninfo_unexecuted_blocks=1 00:13:23.077 00:13:23.077 ' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.077 
11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:23.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:23.077 11:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4018758 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4018758' 00:13:23.077 Process pid: 4018758 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4018758 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 4018758 ']' 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.077 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:23.078 [2024-11-20 11:07:50.370099] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:13:23.078 [2024-11-20 11:07:50.370149] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.078 [2024-11-20 11:07:50.448257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.078 [2024-11-20 11:07:50.491591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.078 [2024-11-20 11:07:50.491628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.078 [2024-11-20 11:07:50.491635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.078 [2024-11-20 11:07:50.491641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.078 [2024-11-20 11:07:50.491646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:23.078 [2024-11-20 11:07:50.493265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.078 [2024-11-20 11:07:50.493301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.078 [2024-11-20 11:07:50.493382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.078 [2024-11-20 11:07:50.493383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.336 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.336 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:23.336 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:24.272 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:24.530 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:24.530 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:24.530 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:24.530 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:24.530 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:24.530 Malloc1 00:13:24.789 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:24.789 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:25.047 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:25.305 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:25.305 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:25.305 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:25.564 Malloc2 00:13:25.564 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:25.564 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:25.822 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:26.084 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:26.084 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:26.084 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:26.084 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:26.084 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:26.084 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:26.084 [2024-11-20 11:07:53.481836] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:13:26.084 [2024-11-20 11:07:53.481868] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019325 ] 00:13:26.084 [2024-11-20 11:07:53.520891] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:26.084 [2024-11-20 11:07:53.526269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:26.084 [2024-11-20 11:07:53.526291] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4758f1b000 00:13:26.084 [2024-11-20 11:07:53.527271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:26.084 [2024-11-20 11:07:53.528269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:26.084 [2024-11-20 11:07:53.529276] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:26.084 [2024-11-20 11:07:53.530281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:26.084 [2024-11-20 11:07:53.531292] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:26.084 [2024-11-20 11:07:53.532294] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:26.084 [2024-11-20 11:07:53.533303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:26.084 [2024-11-20 11:07:53.534303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:26.084 [2024-11-20 11:07:53.535312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:26.084 [2024-11-20 11:07:53.535321] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4758f10000 00:13:26.084 [2024-11-20 11:07:53.536263] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:26.084 [2024-11-20 11:07:53.549871] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:26.084 [2024-11-20 11:07:53.549895] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:26.084 [2024-11-20 11:07:53.552432] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:26.084 [2024-11-20 11:07:53.552472] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:26.084 [2024-11-20 11:07:53.552542] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:26.084 [2024-11-20 11:07:53.552557] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:26.084 [2024-11-20 11:07:53.552562] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:26.084 [2024-11-20 11:07:53.553428] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:26.084 [2024-11-20 11:07:53.553436] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:26.084 [2024-11-20 11:07:53.553443] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:26.084 [2024-11-20 11:07:53.554438] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:26.084 [2024-11-20 11:07:53.554446] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:26.084 [2024-11-20 11:07:53.554453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:26.084 [2024-11-20 11:07:53.555441] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:26.084 [2024-11-20 11:07:53.555449] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:26.084 [2024-11-20 11:07:53.556449] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:26.084 [2024-11-20 11:07:53.556457] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:26.084 [2024-11-20 11:07:53.556461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:26.084 [2024-11-20 11:07:53.556467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:26.084 [2024-11-20 11:07:53.556575] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:26.084 [2024-11-20 11:07:53.556579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:26.084 [2024-11-20 11:07:53.556584] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:26.084 [2024-11-20 11:07:53.557458] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:26.084 [2024-11-20 11:07:53.558459] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:26.084 [2024-11-20 11:07:53.559465] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:26.084 [2024-11-20 11:07:53.560465] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:26.084 [2024-11-20 11:07:53.560528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:26.084 [2024-11-20 11:07:53.561476] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:26.084 [2024-11-20 11:07:53.561483] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:26.084 [2024-11-20 11:07:53.561487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:26.084 [2024-11-20 11:07:53.561505] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:26.084 [2024-11-20 11:07:53.561511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:26.084 [2024-11-20 11:07:53.561527] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:26.084 [2024-11-20 11:07:53.561532] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:26.084 [2024-11-20 11:07:53.561536] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:26.084 [2024-11-20 11:07:53.561548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:26.084 [2024-11-20 11:07:53.561607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.561616] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:26.085 [2024-11-20 11:07:53.561621] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:26.085 [2024-11-20 11:07:53.561625] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:26.085 [2024-11-20 11:07:53.561629] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:26.085 [2024-11-20 11:07:53.561636] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:26.085 [2024-11-20 11:07:53.561640] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:26.085 [2024-11-20 11:07:53.561644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.561675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.561686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.085 [2024-11-20 
11:07:53.561693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.085 [2024-11-20 11:07:53.561700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.085 [2024-11-20 11:07:53.561710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.085 [2024-11-20 11:07:53.561714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.561742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.561749] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:26.085 [2024-11-20 11:07:53.561754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.561782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.561833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561847] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:26.085 [2024-11-20 11:07:53.561851] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:26.085 [2024-11-20 11:07:53.561854] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:26.085 [2024-11-20 11:07:53.561860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.561875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.561883] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:26.085 [2024-11-20 11:07:53.561895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561908] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:26.085 [2024-11-20 11:07:53.561912] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:26.085 [2024-11-20 11:07:53.561915] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:26.085 [2024-11-20 11:07:53.561921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.561943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.561959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.561973] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:26.085 [2024-11-20 11:07:53.561977] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:26.085 [2024-11-20 11:07:53.561980] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:26.085 [2024-11-20 11:07:53.561985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.562000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.562007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.562013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.562020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.562026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.562031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.562035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.562040] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:26.085 [2024-11-20 11:07:53.562044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:26.085 [2024-11-20 11:07:53.562048] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:26.085 [2024-11-20 11:07:53.562065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.562073] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.562084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.562099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.562108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.562119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.562128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.562137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:26.085 [2024-11-20 11:07:53.562149] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:26.085 [2024-11-20 11:07:53.562154] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:26.085 [2024-11-20 11:07:53.562157] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:26.085 [2024-11-20 11:07:53.562160] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:26.085 [2024-11-20 11:07:53.562163] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:26.085 [2024-11-20 11:07:53.562168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:26.085 [2024-11-20 11:07:53.562175] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:26.085 [2024-11-20 11:07:53.562179] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:26.085 [2024-11-20 11:07:53.562182] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:26.085 [2024-11-20 11:07:53.562187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.562193] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:26.085 [2024-11-20 11:07:53.562197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:26.085 [2024-11-20 11:07:53.562200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:26.085 [2024-11-20 11:07:53.562205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:26.085 [2024-11-20 11:07:53.562212] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:26.085 [2024-11-20 11:07:53.562216] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:26.085 [2024-11-20 11:07:53.562219] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:26.085 [2024-11-20 11:07:53.562224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:26.086 [2024-11-20 11:07:53.562230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:26.086 [2024-11-20 11:07:53.562242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:26.086 [2024-11-20 11:07:53.562252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:26.086 [2024-11-20 11:07:53.562258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:26.086 ===================================================== 00:13:26.086 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:26.086 ===================================================== 00:13:26.086 Controller Capabilities/Features 00:13:26.086 ================================ 00:13:26.086 Vendor ID: 4e58 00:13:26.086 Subsystem Vendor ID: 4e58 00:13:26.086 Serial Number: SPDK1 00:13:26.086 Model Number: SPDK bdev Controller 00:13:26.086 Firmware Version: 25.01 00:13:26.086 Recommended Arb Burst: 6 00:13:26.086 IEEE OUI Identifier: 8d 6b 50 00:13:26.086 Multi-path I/O 00:13:26.086 May have multiple subsystem ports: Yes 00:13:26.086 May have multiple controllers: Yes 00:13:26.086 Associated with SR-IOV VF: No 00:13:26.086 Max Data Transfer Size: 131072 00:13:26.086 Max Number of Namespaces: 32 00:13:26.086 Max Number of I/O Queues: 127 00:13:26.086 NVMe Specification Version (VS): 1.3 00:13:26.086 NVMe Specification Version (Identify): 1.3 00:13:26.086 Maximum Queue Entries: 256 00:13:26.086 Contiguous Queues Required: Yes 00:13:26.086 Arbitration Mechanisms Supported 00:13:26.086 Weighted Round Robin: Not Supported 00:13:26.086 Vendor Specific: Not Supported 00:13:26.086 Reset Timeout: 15000 ms 00:13:26.086 Doorbell Stride: 4 bytes 00:13:26.086 NVM Subsystem Reset: Not Supported 00:13:26.086 Command Sets Supported 00:13:26.086 NVM Command Set: Supported 00:13:26.086 Boot Partition: Not Supported 00:13:26.086 Memory 
Page Size Minimum: 4096 bytes 00:13:26.086 Memory Page Size Maximum: 4096 bytes 00:13:26.086 Persistent Memory Region: Not Supported 00:13:26.086 Optional Asynchronous Events Supported 00:13:26.086 Namespace Attribute Notices: Supported 00:13:26.086 Firmware Activation Notices: Not Supported 00:13:26.086 ANA Change Notices: Not Supported 00:13:26.086 PLE Aggregate Log Change Notices: Not Supported 00:13:26.086 LBA Status Info Alert Notices: Not Supported 00:13:26.086 EGE Aggregate Log Change Notices: Not Supported 00:13:26.086 Normal NVM Subsystem Shutdown event: Not Supported 00:13:26.086 Zone Descriptor Change Notices: Not Supported 00:13:26.086 Discovery Log Change Notices: Not Supported 00:13:26.086 Controller Attributes 00:13:26.086 128-bit Host Identifier: Supported 00:13:26.086 Non-Operational Permissive Mode: Not Supported 00:13:26.086 NVM Sets: Not Supported 00:13:26.086 Read Recovery Levels: Not Supported 00:13:26.086 Endurance Groups: Not Supported 00:13:26.086 Predictable Latency Mode: Not Supported 00:13:26.086 Traffic Based Keep ALive: Not Supported 00:13:26.086 Namespace Granularity: Not Supported 00:13:26.086 SQ Associations: Not Supported 00:13:26.086 UUID List: Not Supported 00:13:26.086 Multi-Domain Subsystem: Not Supported 00:13:26.086 Fixed Capacity Management: Not Supported 00:13:26.086 Variable Capacity Management: Not Supported 00:13:26.086 Delete Endurance Group: Not Supported 00:13:26.086 Delete NVM Set: Not Supported 00:13:26.086 Extended LBA Formats Supported: Not Supported 00:13:26.086 Flexible Data Placement Supported: Not Supported 00:13:26.086 00:13:26.086 Controller Memory Buffer Support 00:13:26.086 ================================ 00:13:26.086 Supported: No 00:13:26.086 00:13:26.086 Persistent Memory Region Support 00:13:26.086 ================================ 00:13:26.086 Supported: No 00:13:26.086 00:13:26.086 Admin Command Set Attributes 00:13:26.086 ============================ 00:13:26.086 Security Send/Receive: Not Supported 
00:13:26.086 Format NVM: Not Supported 00:13:26.086 Firmware Activate/Download: Not Supported 00:13:26.086 Namespace Management: Not Supported 00:13:26.086 Device Self-Test: Not Supported 00:13:26.086 Directives: Not Supported 00:13:26.086 NVMe-MI: Not Supported 00:13:26.086 Virtualization Management: Not Supported 00:13:26.086 Doorbell Buffer Config: Not Supported 00:13:26.086 Get LBA Status Capability: Not Supported 00:13:26.086 Command & Feature Lockdown Capability: Not Supported 00:13:26.086 Abort Command Limit: 4 00:13:26.086 Async Event Request Limit: 4 00:13:26.086 Number of Firmware Slots: N/A 00:13:26.086 Firmware Slot 1 Read-Only: N/A 00:13:26.086 Firmware Activation Without Reset: N/A 00:13:26.086 Multiple Update Detection Support: N/A 00:13:26.086 Firmware Update Granularity: No Information Provided 00:13:26.086 Per-Namespace SMART Log: No 00:13:26.086 Asymmetric Namespace Access Log Page: Not Supported 00:13:26.086 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:26.086 Command Effects Log Page: Supported 00:13:26.086 Get Log Page Extended Data: Supported 00:13:26.086 Telemetry Log Pages: Not Supported 00:13:26.086 Persistent Event Log Pages: Not Supported 00:13:26.086 Supported Log Pages Log Page: May Support 00:13:26.086 Commands Supported & Effects Log Page: Not Supported 00:13:26.086 Feature Identifiers & Effects Log Page:May Support 00:13:26.086 NVMe-MI Commands & Effects Log Page: May Support 00:13:26.086 Data Area 4 for Telemetry Log: Not Supported 00:13:26.086 Error Log Page Entries Supported: 128 00:13:26.086 Keep Alive: Supported 00:13:26.086 Keep Alive Granularity: 10000 ms 00:13:26.086 00:13:26.086 NVM Command Set Attributes 00:13:26.086 ========================== 00:13:26.086 Submission Queue Entry Size 00:13:26.086 Max: 64 00:13:26.086 Min: 64 00:13:26.086 Completion Queue Entry Size 00:13:26.086 Max: 16 00:13:26.086 Min: 16 00:13:26.086 Number of Namespaces: 32 00:13:26.086 Compare Command: Supported 00:13:26.086 Write Uncorrectable 
Command: Not Supported 00:13:26.086 Dataset Management Command: Supported 00:13:26.086 Write Zeroes Command: Supported 00:13:26.086 Set Features Save Field: Not Supported 00:13:26.086 Reservations: Not Supported 00:13:26.086 Timestamp: Not Supported 00:13:26.086 Copy: Supported 00:13:26.086 Volatile Write Cache: Present 00:13:26.086 Atomic Write Unit (Normal): 1 00:13:26.086 Atomic Write Unit (PFail): 1 00:13:26.086 Atomic Compare & Write Unit: 1 00:13:26.086 Fused Compare & Write: Supported 00:13:26.086 Scatter-Gather List 00:13:26.086 SGL Command Set: Supported (Dword aligned) 00:13:26.086 SGL Keyed: Not Supported 00:13:26.086 SGL Bit Bucket Descriptor: Not Supported 00:13:26.086 SGL Metadata Pointer: Not Supported 00:13:26.086 Oversized SGL: Not Supported 00:13:26.086 SGL Metadata Address: Not Supported 00:13:26.086 SGL Offset: Not Supported 00:13:26.086 Transport SGL Data Block: Not Supported 00:13:26.086 Replay Protected Memory Block: Not Supported 00:13:26.086 00:13:26.086 Firmware Slot Information 00:13:26.086 ========================= 00:13:26.086 Active slot: 1 00:13:26.086 Slot 1 Firmware Revision: 25.01 00:13:26.086 00:13:26.086 00:13:26.086 Commands Supported and Effects 00:13:26.086 ============================== 00:13:26.086 Admin Commands 00:13:26.086 -------------- 00:13:26.086 Get Log Page (02h): Supported 00:13:26.086 Identify (06h): Supported 00:13:26.086 Abort (08h): Supported 00:13:26.086 Set Features (09h): Supported 00:13:26.086 Get Features (0Ah): Supported 00:13:26.086 Asynchronous Event Request (0Ch): Supported 00:13:26.086 Keep Alive (18h): Supported 00:13:26.086 I/O Commands 00:13:26.086 ------------ 00:13:26.086 Flush (00h): Supported LBA-Change 00:13:26.086 Write (01h): Supported LBA-Change 00:13:26.086 Read (02h): Supported 00:13:26.086 Compare (05h): Supported 00:13:26.086 Write Zeroes (08h): Supported LBA-Change 00:13:26.086 Dataset Management (09h): Supported LBA-Change 00:13:26.086 Copy (19h): Supported LBA-Change 00:13:26.086 
00:13:26.086 Error Log 00:13:26.086 ========= 00:13:26.086 00:13:26.086 Arbitration 00:13:26.086 =========== 00:13:26.086 Arbitration Burst: 1 00:13:26.086 00:13:26.086 Power Management 00:13:26.086 ================ 00:13:26.086 Number of Power States: 1 00:13:26.086 Current Power State: Power State #0 00:13:26.086 Power State #0: 00:13:26.086 Max Power: 0.00 W 00:13:26.086 Non-Operational State: Operational 00:13:26.086 Entry Latency: Not Reported 00:13:26.086 Exit Latency: Not Reported 00:13:26.086 Relative Read Throughput: 0 00:13:26.086 Relative Read Latency: 0 00:13:26.086 Relative Write Throughput: 0 00:13:26.086 Relative Write Latency: 0 00:13:26.086 Idle Power: Not Reported 00:13:26.086 Active Power: Not Reported 00:13:26.087 Non-Operational Permissive Mode: Not Supported 00:13:26.087 00:13:26.087 Health Information 00:13:26.087 ================== 00:13:26.087 Critical Warnings: 00:13:26.087 Available Spare Space: OK 00:13:26.087 Temperature: OK 00:13:26.087 Device Reliability: OK 00:13:26.087 Read Only: No 00:13:26.087 Volatile Memory Backup: OK 00:13:26.087 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:26.087 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:26.087 Available Spare: 0% 00:13:26.087 Available Sp[2024-11-20 11:07:53.562344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:26.087 [2024-11-20 11:07:53.562351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:26.087 [2024-11-20 11:07:53.562376] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:26.087 [2024-11-20 11:07:53.562385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.087 [2024-11-20 11:07:53.562390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.087 [2024-11-20 11:07:53.562396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.087 [2024-11-20 11:07:53.562401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.087 [2024-11-20 11:07:53.564955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:26.087 [2024-11-20 11:07:53.564966] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:26.087 [2024-11-20 11:07:53.565501] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:26.087 [2024-11-20 11:07:53.565552] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:26.087 [2024-11-20 11:07:53.565558] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:26.087 [2024-11-20 11:07:53.566503] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:26.087 [2024-11-20 11:07:53.566514] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:26.087 [2024-11-20 11:07:53.566563] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:26.087 [2024-11-20 11:07:53.568543] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:26.346 are Threshold: 0% 00:13:26.346 Life Percentage Used: 0% 
00:13:26.346 Data Units Read: 0 00:13:26.346 Data Units Written: 0 00:13:26.346 Host Read Commands: 0 00:13:26.346 Host Write Commands: 0 00:13:26.346 Controller Busy Time: 0 minutes 00:13:26.346 Power Cycles: 0 00:13:26.346 Power On Hours: 0 hours 00:13:26.346 Unsafe Shutdowns: 0 00:13:26.346 Unrecoverable Media Errors: 0 00:13:26.346 Lifetime Error Log Entries: 0 00:13:26.346 Warning Temperature Time: 0 minutes 00:13:26.346 Critical Temperature Time: 0 minutes 00:13:26.346 00:13:26.346 Number of Queues 00:13:26.346 ================ 00:13:26.346 Number of I/O Submission Queues: 127 00:13:26.346 Number of I/O Completion Queues: 127 00:13:26.346 00:13:26.346 Active Namespaces 00:13:26.346 ================= 00:13:26.346 Namespace ID:1 00:13:26.346 Error Recovery Timeout: Unlimited 00:13:26.346 Command Set Identifier: NVM (00h) 00:13:26.346 Deallocate: Supported 00:13:26.346 Deallocated/Unwritten Error: Not Supported 00:13:26.346 Deallocated Read Value: Unknown 00:13:26.346 Deallocate in Write Zeroes: Not Supported 00:13:26.346 Deallocated Guard Field: 0xFFFF 00:13:26.346 Flush: Supported 00:13:26.346 Reservation: Supported 00:13:26.346 Namespace Sharing Capabilities: Multiple Controllers 00:13:26.346 Size (in LBAs): 131072 (0GiB) 00:13:26.346 Capacity (in LBAs): 131072 (0GiB) 00:13:26.346 Utilization (in LBAs): 131072 (0GiB) 00:13:26.346 NGUID: 7EB7C8F3B3AD4B6A8FD9F79624686F8C 00:13:26.346 UUID: 7eb7c8f3-b3ad-4b6a-8fd9-f79624686f8c 00:13:26.346 Thin Provisioning: Not Supported 00:13:26.346 Per-NS Atomic Units: Yes 00:13:26.346 Atomic Boundary Size (Normal): 0 00:13:26.346 Atomic Boundary Size (PFail): 0 00:13:26.346 Atomic Boundary Offset: 0 00:13:26.346 Maximum Single Source Range Length: 65535 00:13:26.346 Maximum Copy Length: 65535 00:13:26.346 Maximum Source Range Count: 1 00:13:26.346 NGUID/EUI64 Never Reused: No 00:13:26.346 Namespace Write Protected: No 00:13:26.346 Number of LBA Formats: 1 00:13:26.346 Current LBA Format: LBA Format #00 00:13:26.346 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:26.346 00:13:26.346 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:26.346 [2024-11-20 11:07:53.804800] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:31.717 Initializing NVMe Controllers 00:13:31.717 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:31.717 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:31.717 Initialization complete. Launching workers. 00:13:31.717 ======================================================== 00:13:31.717 Latency(us) 00:13:31.717 Device Information : IOPS MiB/s Average min max 00:13:31.717 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39942.78 156.03 3205.00 971.45 9376.34 00:13:31.717 ======================================================== 00:13:31.717 Total : 39942.78 156.03 3205.00 971.45 9376.34 00:13:31.717 00:13:31.717 [2024-11-20 11:07:58.826271] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:31.717 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:31.717 [2024-11-20 11:07:59.062370] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:36.985 Initializing NVMe Controllers 00:13:36.985 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:36.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:36.985 Initialization complete. Launching workers. 00:13:36.985 ======================================================== 00:13:36.985 Latency(us) 00:13:36.985 Device Information : IOPS MiB/s Average min max 00:13:36.985 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16045.48 62.68 7982.66 5991.94 15454.10 00:13:36.985 ======================================================== 00:13:36.985 Total : 16045.48 62.68 7982.66 5991.94 15454.10 00:13:36.985 00:13:36.985 [2024-11-20 11:08:04.105833] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:36.985 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:36.985 [2024-11-20 11:08:04.322881] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:42.259 [2024-11-20 11:08:09.395284] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:42.259 Initializing NVMe Controllers 00:13:42.259 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:42.259 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:42.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:42.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:42.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:42.259 Initialization complete. 
Launching workers. 00:13:42.259 Starting thread on core 2 00:13:42.259 Starting thread on core 3 00:13:42.259 Starting thread on core 1 00:13:42.259 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:42.259 [2024-11-20 11:08:09.690362] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:45.541 [2024-11-20 11:08:12.744197] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:45.541 Initializing NVMe Controllers 00:13:45.541 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:45.541 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:45.541 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:45.541 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:45.541 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:45.541 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:45.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:45.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:45.541 Initialization complete. Launching workers. 
00:13:45.541 Starting thread on core 1 with urgent priority queue 00:13:45.541 Starting thread on core 2 with urgent priority queue 00:13:45.541 Starting thread on core 3 with urgent priority queue 00:13:45.541 Starting thread on core 0 with urgent priority queue 00:13:45.541 SPDK bdev Controller (SPDK1 ) core 0: 6078.00 IO/s 16.45 secs/100000 ios 00:13:45.541 SPDK bdev Controller (SPDK1 ) core 1: 5418.67 IO/s 18.45 secs/100000 ios 00:13:45.541 SPDK bdev Controller (SPDK1 ) core 2: 5021.67 IO/s 19.91 secs/100000 ios 00:13:45.541 SPDK bdev Controller (SPDK1 ) core 3: 5902.00 IO/s 16.94 secs/100000 ios 00:13:45.541 ======================================================== 00:13:45.541 00:13:45.541 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:45.541 [2024-11-20 11:08:13.032282] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:45.799 Initializing NVMe Controllers 00:13:45.799 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:45.799 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:45.799 Namespace ID: 1 size: 0GB 00:13:45.799 Initialization complete. 00:13:45.799 INFO: using host memory buffer for IO 00:13:45.799 Hello world! 
00:13:45.799 [2024-11-20 11:08:13.066533] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:45.799 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:46.056 [2024-11-20 11:08:13.349393] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:46.994 Initializing NVMe Controllers 00:13:46.994 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.994 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.994 Initialization complete. Launching workers. 00:13:46.994 submit (in ns) avg, min, max = 7696.2, 3302.6, 4000934.8 00:13:46.994 complete (in ns) avg, min, max = 20617.4, 1809.6, 4009585.2 00:13:46.994 00:13:46.994 Submit histogram 00:13:46.994 ================ 00:13:46.994 Range in us Cumulative Count 00:13:46.994 3.297 - 3.311: 0.0370% ( 6) 00:13:46.994 3.311 - 3.325: 0.1540% ( 19) 00:13:46.994 3.325 - 3.339: 0.4004% ( 40) 00:13:46.994 3.339 - 3.353: 1.2383% ( 136) 00:13:46.994 3.353 - 3.367: 4.3987% ( 513) 00:13:46.994 3.367 - 3.381: 9.7154% ( 863) 00:13:46.994 3.381 - 3.395: 16.2765% ( 1065) 00:13:46.994 3.395 - 3.409: 22.6528% ( 1035) 00:13:46.994 3.409 - 3.423: 29.1893% ( 1061) 00:13:46.994 3.423 - 3.437: 35.0234% ( 947) 00:13:46.994 3.437 - 3.450: 40.0628% ( 818) 00:13:46.994 3.450 - 3.464: 44.9298% ( 790) 00:13:46.994 3.464 - 3.478: 48.8726% ( 640) 00:13:46.994 3.478 - 3.492: 53.3144% ( 721) 00:13:46.994 3.492 - 3.506: 58.6804% ( 871) 00:13:46.994 3.506 - 3.520: 65.4941% ( 1106) 00:13:46.994 3.520 - 3.534: 70.3733% ( 792) 00:13:46.994 3.534 - 3.548: 75.4374% ( 822) 00:13:46.994 3.548 - 3.562: 80.8403% ( 877) 00:13:46.994 3.562 - 3.590: 85.7196% ( 792) 00:13:46.994 3.590 - 3.617: 87.3152% ( 259) 
00:13:46.994 3.617 - 3.645: 87.7896% ( 77) 00:13:46.994 3.645 - 3.673: 88.8615% ( 174) 00:13:46.994 3.673 - 3.701: 90.3832% ( 247) 00:13:46.994 3.701 - 3.729: 92.1575% ( 288) 00:13:46.994 3.729 - 3.757: 93.9194% ( 286) 00:13:46.994 3.757 - 3.784: 95.8354% ( 311) 00:13:46.994 3.784 - 3.812: 97.4495% ( 262) 00:13:46.994 3.812 - 3.840: 98.3736% ( 150) 00:13:46.994 3.840 - 3.868: 98.9034% ( 86) 00:13:46.994 3.868 - 3.896: 99.2730% ( 60) 00:13:46.994 3.896 - 3.923: 99.4640% ( 31) 00:13:46.994 3.923 - 3.951: 99.5379% ( 12) 00:13:46.994 3.951 - 3.979: 99.5626% ( 4) 00:13:46.994 5.287 - 5.315: 99.5688% ( 1) 00:13:46.994 5.537 - 5.565: 99.5749% ( 1) 00:13:46.994 5.732 - 5.760: 99.5811% ( 1) 00:13:46.994 5.788 - 5.816: 99.5872% ( 1) 00:13:46.994 5.843 - 5.871: 99.5934% ( 1) 00:13:46.994 6.066 - 6.094: 99.5996% ( 1) 00:13:46.994 6.122 - 6.150: 99.6057% ( 1) 00:13:46.994 6.177 - 6.205: 99.6119% ( 1) 00:13:46.994 6.205 - 6.233: 99.6180% ( 1) 00:13:46.994 6.261 - 6.289: 99.6242% ( 1) 00:13:46.994 6.289 - 6.317: 99.6365% ( 2) 00:13:46.994 6.428 - 6.456: 99.6427% ( 1) 00:13:46.994 6.483 - 6.511: 99.6488% ( 1) 00:13:46.994 6.511 - 6.539: 99.6550% ( 1) 00:13:46.994 6.567 - 6.595: 99.6612% ( 1) 00:13:46.994 6.623 - 6.650: 99.6673% ( 1) 00:13:46.994 6.650 - 6.678: 99.6735% ( 1) 00:13:46.994 6.817 - 6.845: 99.6796% ( 1) 00:13:46.994 6.901 - 6.929: 99.6920% ( 2) 00:13:46.994 7.040 - 7.068: 99.6981% ( 1) 00:13:46.994 7.068 - 7.096: 99.7043% ( 1) 00:13:46.994 7.123 - 7.179: 99.7104% ( 1) 00:13:46.994 7.235 - 7.290: 99.7166% ( 1) 00:13:46.994 7.290 - 7.346: 99.7228% ( 1) 00:13:46.994 7.346 - 7.402: 99.7413% ( 3) 00:13:46.994 7.402 - 7.457: 99.7597% ( 3) 00:13:46.994 7.457 - 7.513: 99.7721% ( 2) 00:13:46.994 7.513 - 7.569: 99.7844% ( 2) 00:13:46.994 7.569 - 7.624: 99.7905% ( 1) 00:13:46.994 7.624 - 7.680: 99.7967% ( 1) 00:13:46.994 7.736 - 7.791: 99.8029% ( 1) 00:13:46.994 7.958 - 8.014: 99.8152% ( 2) 00:13:46.994 8.070 - 8.125: 99.8213% ( 1) 00:13:46.994 8.125 - 8.181: 99.8460% ( 4) 
00:13:46.994 8.515 - 8.570: 99.8521% ( 1)
00:13:46.994 8.682 - 8.737: 99.8583% ( 1)
00:13:46.994 8.737 - 8.793: 99.8706% ( 2)
00:13:46.994 8.849 - 8.904: 99.8768% ( 1)
00:13:46.995 8.904 - 8.960: 99.8829% ( 1)
00:13:46.995 9.071 - 9.127: 99.8891% ( 1)
00:13:46.995 9.183 - 9.238: 99.8953% ( 1)
00:13:46.995 3989.148 - 4017.642: 100.0000% ( 17)
00:13:46.995
00:13:46.995 Complete histogram
00:13:46.995 ==================
00:13:46.995 Range in us    Cumulative Count
00:13:46.995 1.809 - 1.823: 0.0493% ( 8)
[2024-11-20 11:08:14.370440] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:46.995 1.823 - 1.837: 0.7208% ( 109)
00:13:46.995 1.837 - 1.850: 2.1069% ( 225)
00:13:46.995 1.850 - 1.864: 2.8894% ( 127)
00:13:46.995 1.864 - 1.878: 11.4650% ( 1392)
00:13:46.995 1.878 - 1.892: 64.4591% ( 8602)
00:13:46.995 1.892 - 1.906: 86.7238% ( 3614)
00:13:46.995 1.906 - 1.920: 92.3300% ( 910)
00:13:46.995 1.920 - 1.934: 94.1166% ( 290)
00:13:46.995 1.934 - 1.948: 94.8497% ( 119)
00:13:46.995 1.948 - 1.962: 96.9258% ( 337)
00:13:46.995 1.962 - 1.976: 98.6570% ( 281)
00:13:46.995 1.976 - 1.990: 99.1560% ( 81)
00:13:46.995 1.990 - 2.003: 99.2607% ( 17)
00:13:46.995 2.003 - 2.017: 99.2854% ( 4)
00:13:46.995 2.017 - 2.031: 99.2915% ( 1)
00:13:46.995 2.031 - 2.045: 99.3038% ( 2)
00:13:46.995 2.045 - 2.059: 99.3100% ( 1)
00:13:46.995 2.059 - 2.073: 99.3162% ( 1)
00:13:46.995 2.073 - 2.087: 99.3223% ( 1)
00:13:46.995 2.129 - 2.143: 99.3285% ( 1)
00:13:46.995 2.240 - 2.254: 99.3346% ( 1)
00:13:46.995 3.868 - 3.896: 99.3408% ( 1)
00:13:46.995 4.007 - 4.035: 99.3470% ( 1)
00:13:46.995 4.063 - 4.090: 99.3531% ( 1)
00:13:46.995 4.146 - 4.174: 99.3593% ( 1)
00:13:46.995 4.424 - 4.452: 99.3655% ( 1)
00:13:46.995 4.508 - 4.536: 99.3716% ( 1)
00:13:46.995 4.675 - 4.703: 99.3778% ( 1)
00:13:46.995 4.786 - 4.814: 99.3839% ( 1)
00:13:46.995 4.981 - 5.009: 99.3901% ( 1)
00:13:46.995 5.176 - 5.203: 99.3963% ( 1)
00:13:46.995 5.343 - 5.370: 99.4024% ( 1)
00:13:46.995 5.454 - 5.482: 99.4086% ( 1)
00:13:46.995 5.482 - 5.510: 99.4147% ( 1)
00:13:46.995 5.510 - 5.537: 99.4209% ( 1)
00:13:46.995 5.565 - 5.593: 99.4271% ( 1)
00:13:46.995 5.621 - 5.649: 99.4394% ( 2)
00:13:46.995 5.704 - 5.732: 99.4455% ( 1)
00:13:46.995 5.788 - 5.816: 99.4517% ( 1)
00:13:46.995 5.899 - 5.927: 99.4640% ( 2)
00:13:46.995 5.955 - 5.983: 99.4702% ( 1)
00:13:46.995 6.122 - 6.150: 99.4763% ( 1)
00:13:46.995 6.177 - 6.205: 99.4825% ( 1)
00:13:46.995 6.289 - 6.317: 99.4887% ( 1)
00:13:46.995 6.317 - 6.344: 99.4948% ( 1)
00:13:46.995 6.790 - 6.817: 99.5010% ( 1)
00:13:46.995 6.929 - 6.957: 99.5071% ( 1)
00:13:46.995 7.012 - 7.040: 99.5133% ( 1)
00:13:46.995 7.068 - 7.096: 99.5195% ( 1)
00:13:46.995 7.569 - 7.624: 99.5256% ( 1)
00:13:46.995 1175.374 - 1182.497: 99.5318% ( 1)
00:13:46.995 3006.108 - 3020.355: 99.5379% ( 1)
00:13:46.995 3989.148 - 4017.642: 100.0000% ( 75)
00:13:46.995
00:13:46.995 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:13:46.995 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:13:46.995 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:13:46.995 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:13:46.995 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:13:47.253 [
00:13:47.253   {
00:13:47.253     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:13:47.253     "subtype": "Discovery",
00:13:47.253     "listen_addresses": [],
00:13:47.253     "allow_any_host": true,
00:13:47.253     "hosts": []
00:13:47.253   },
00:13:47.253   {
00:13:47.253     "nqn": "nqn.2019-07.io.spdk:cnode1",
00:13:47.253     "subtype": "NVMe",
00:13:47.253     "listen_addresses": [
00:13:47.253       {
00:13:47.253         "trtype": "VFIOUSER",
00:13:47.253         "adrfam": "IPv4",
00:13:47.253         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:13:47.253         "trsvcid": "0"
00:13:47.253       }
00:13:47.253     ],
00:13:47.253     "allow_any_host": true,
00:13:47.253     "hosts": [],
00:13:47.253     "serial_number": "SPDK1",
00:13:47.253     "model_number": "SPDK bdev Controller",
00:13:47.253     "max_namespaces": 32,
00:13:47.253     "min_cntlid": 1,
00:13:47.253     "max_cntlid": 65519,
00:13:47.253     "namespaces": [
00:13:47.253       {
00:13:47.253         "nsid": 1,
00:13:47.253         "bdev_name": "Malloc1",
00:13:47.253         "name": "Malloc1",
00:13:47.253         "nguid": "7EB7C8F3B3AD4B6A8FD9F79624686F8C",
00:13:47.253         "uuid": "7eb7c8f3-b3ad-4b6a-8fd9-f79624686f8c"
00:13:47.253       }
00:13:47.253     ]
00:13:47.253   },
00:13:47.253   {
00:13:47.253     "nqn": "nqn.2019-07.io.spdk:cnode2",
00:13:47.253     "subtype": "NVMe",
00:13:47.253     "listen_addresses": [
00:13:47.253       {
00:13:47.253         "trtype": "VFIOUSER",
00:13:47.253         "adrfam": "IPv4",
00:13:47.253         "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:13:47.253         "trsvcid": "0"
00:13:47.253       }
00:13:47.253     ],
00:13:47.253     "allow_any_host": true,
00:13:47.253     "hosts": [],
00:13:47.253     "serial_number": "SPDK2",
00:13:47.253     "model_number": "SPDK bdev Controller",
00:13:47.253     "max_namespaces": 32,
00:13:47.253     "min_cntlid": 1,
00:13:47.253     "max_cntlid": 65519,
00:13:47.253     "namespaces": [
00:13:47.253       {
00:13:47.253         "nsid": 1,
00:13:47.253         "bdev_name": "Malloc2",
00:13:47.253         "name": "Malloc2",
00:13:47.253         "nguid": "629BE02682514953B70607556F4CB988",
00:13:47.253         "uuid": "629be026-8251-4953-b706-07556f4cb988"
00:13:47.253       }
00:13:47.253     ]
00:13:47.253   }
00:13:47.253 ]
00:13:47.253 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:13:47.253 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:13:47.253 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4022778
00:13:47.253 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:13:47.253 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:13:47.253 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:47.253 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:13:47.253 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:13:47.253 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:13:47.253 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:13:47.511 [2024-11-20 11:08:14.777344] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:47.511 Malloc3
00:13:47.511 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:13:47.769 [2024-11-20 11:08:15.019244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:47.769 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_get_subsystems
00:13:47.769 Asynchronous Event Request test
00:13:47.769 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:13:47.769 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:13:47.769 Registering asynchronous event callbacks...
00:13:47.769 Starting namespace attribute notice tests for all controllers...
00:13:47.769 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:13:47.769 aer_cb - Changed Namespace
00:13:47.769 Cleaning up...
00:13:47.769 [
00:13:47.769   {
00:13:47.769     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:13:47.769     "subtype": "Discovery",
00:13:47.769     "listen_addresses": [],
00:13:47.769     "allow_any_host": true,
00:13:47.769     "hosts": []
00:13:47.769   },
00:13:47.769   {
00:13:47.769     "nqn": "nqn.2019-07.io.spdk:cnode1",
00:13:47.769     "subtype": "NVMe",
00:13:47.769     "listen_addresses": [
00:13:47.769       {
00:13:47.769         "trtype": "VFIOUSER",
00:13:47.769         "adrfam": "IPv4",
00:13:47.769         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:13:47.769         "trsvcid": "0"
00:13:47.769       }
00:13:47.769     ],
00:13:47.769     "allow_any_host": true,
00:13:47.769     "hosts": [],
00:13:47.769     "serial_number": "SPDK1",
00:13:47.769     "model_number": "SPDK bdev Controller",
00:13:47.769     "max_namespaces": 32,
00:13:47.769     "min_cntlid": 1,
00:13:47.769     "max_cntlid": 65519,
00:13:47.769     "namespaces": [
00:13:47.769       {
00:13:47.769         "nsid": 1,
00:13:47.769         "bdev_name": "Malloc1",
00:13:47.769         "name": "Malloc1",
00:13:47.769         "nguid": "7EB7C8F3B3AD4B6A8FD9F79624686F8C",
00:13:47.769         "uuid": "7eb7c8f3-b3ad-4b6a-8fd9-f79624686f8c"
00:13:47.769       },
00:13:47.769       {
00:13:47.769         "nsid": 2,
00:13:47.769         "bdev_name": "Malloc3",
00:13:47.769         "name": "Malloc3",
00:13:47.769         "nguid": "A2FCE89A202744919DC2C99DC27815DD",
00:13:47.769         "uuid": "a2fce89a-2027-4491-9dc2-c99dc27815dd"
00:13:47.769       }
00:13:47.769     ]
00:13:47.769   },
00:13:47.769   {
00:13:47.769     "nqn": "nqn.2019-07.io.spdk:cnode2",
00:13:47.769     "subtype": "NVMe",
00:13:47.769     "listen_addresses": [
00:13:47.769       {
00:13:47.769         "trtype": "VFIOUSER",
00:13:47.769         "adrfam": "IPv4",
00:13:47.769         "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:13:47.769         "trsvcid": "0"
00:13:47.769       }
00:13:47.769     ],
00:13:47.769     "allow_any_host": true,
00:13:47.769     "hosts": [],
00:13:47.769     "serial_number": "SPDK2",
00:13:47.769     "model_number": "SPDK bdev Controller",
00:13:47.769     "max_namespaces": 32,
00:13:47.769     "min_cntlid": 1,
00:13:47.769     "max_cntlid": 65519,
00:13:47.769     "namespaces": [
00:13:47.769       {
00:13:47.769         "nsid": 1,
00:13:47.769         "bdev_name": "Malloc2",
00:13:47.769         "name": "Malloc2",
00:13:47.769         "nguid": "629BE02682514953B70607556F4CB988",
00:13:47.769         "uuid": "629be026-8251-4953-b706-07556f4cb988"
00:13:47.769       }
00:13:47.769     ]
00:13:47.769   }
00:13:47.769 ]
00:13:47.769 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4022778
00:13:47.769 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:47.770 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:13:47.770 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:13:47.770 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:13:48.030 [2024-11-20 11:08:15.267890] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization...
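As an aside for anyone replaying this log: the AER test's pass condition is that the new Malloc3 namespace appears as nsid 2 under nqn.2019-07.io.spdk:cnode1 in the `nvmf_get_subsystems` output above. That check can also be scripted; the sketch below is illustrative only (the embedded JSON is a trimmed copy of the log output, and `find_namespace` is a hypothetical helper, not part of SPDK):

```python
import json

# Trimmed-down copy of the nvmf_get_subsystems output shown in the log above;
# only the fields needed for the check are kept, values taken from the log.
subsystems_json = """
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery", "namespaces": []},
  {"nqn": "nqn.2019-07.io.spdk:cnode1", "subtype": "NVMe",
   "namespaces": [
     {"nsid": 1, "bdev_name": "Malloc1"},
     {"nsid": 2, "bdev_name": "Malloc3"}
   ]}
]
"""

def find_namespace(subsystems, nqn, bdev_name):
    """Return the namespace entry for bdev_name under the given subsystem NQN, or None."""
    for sub in subsystems:
        if sub["nqn"] == nqn:
            for ns in sub.get("namespaces", []):
                if ns["bdev_name"] == bdev_name:
                    return ns
    return None

# After nvmf_subsystem_add_ns, Malloc3 should be visible as a namespace of cnode1.
ns = find_namespace(json.loads(subsystems_json), "nqn.2019-07.io.spdk:cnode1", "Malloc3")
```

In a live run the JSON would come from `scripts/rpc.py nvmf_get_subsystems` rather than being inlined.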
00:13:48.030 [2024-11-20 11:08:15.267924] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022836 ] 00:13:48.030 [2024-11-20 11:08:15.308811] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:48.030 [2024-11-20 11:08:15.317238] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:48.030 [2024-11-20 11:08:15.317264] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f63b32de000 00:13:48.030 [2024-11-20 11:08:15.318237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:48.030 [2024-11-20 11:08:15.319246] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:48.030 [2024-11-20 11:08:15.320248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:48.030 [2024-11-20 11:08:15.321251] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:48.030 [2024-11-20 11:08:15.322255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:48.030 [2024-11-20 11:08:15.323261] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:48.030 [2024-11-20 11:08:15.324266] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:48.030 
[2024-11-20 11:08:15.325278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:48.030 [2024-11-20 11:08:15.326284] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:48.030 [2024-11-20 11:08:15.326295] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f63b32d3000 00:13:48.030 [2024-11-20 11:08:15.327241] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:48.030 [2024-11-20 11:08:15.336766] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:48.030 [2024-11-20 11:08:15.336791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:48.030 [2024-11-20 11:08:15.341887] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:48.030 [2024-11-20 11:08:15.341926] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:48.030 [2024-11-20 11:08:15.342004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:48.030 [2024-11-20 11:08:15.342028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:48.030 [2024-11-20 11:08:15.342034] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:48.030 [2024-11-20 11:08:15.342894] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:48.030 [2024-11-20 11:08:15.342904] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:48.030 [2024-11-20 11:08:15.342911] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:48.030 [2024-11-20 11:08:15.343905] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:48.030 [2024-11-20 11:08:15.343915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:48.030 [2024-11-20 11:08:15.343922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:48.030 [2024-11-20 11:08:15.344919] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:48.030 [2024-11-20 11:08:15.344929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:48.030 [2024-11-20 11:08:15.345924] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:48.030 [2024-11-20 11:08:15.345933] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:48.030 [2024-11-20 11:08:15.345938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:48.030 [2024-11-20 11:08:15.345943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:48.030 [2024-11-20 11:08:15.346055] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:48.030 [2024-11-20 11:08:15.346060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:48.030 [2024-11-20 11:08:15.346065] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:48.030 [2024-11-20 11:08:15.346933] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:48.030 [2024-11-20 11:08:15.347934] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:48.030 [2024-11-20 11:08:15.348943] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:48.030 [2024-11-20 11:08:15.349940] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:48.030 [2024-11-20 11:08:15.349985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:48.030 [2024-11-20 11:08:15.350946] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:48.030 [2024-11-20 11:08:15.350961] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:48.030 [2024-11-20 11:08:15.350965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:48.030 [2024-11-20 11:08:15.350982] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:48.030 [2024-11-20 11:08:15.350994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:48.030 [2024-11-20 11:08:15.351005] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:48.030 [2024-11-20 11:08:15.351010] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:48.030 [2024-11-20 11:08:15.351015] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.030 [2024-11-20 11:08:15.351028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:48.030 [2024-11-20 11:08:15.359958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:48.030 [2024-11-20 11:08:15.359972] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:48.030 [2024-11-20 11:08:15.359977] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:48.030 [2024-11-20 11:08:15.359982] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:48.030 [2024-11-20 11:08:15.359987] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:48.030 [2024-11-20 11:08:15.359995] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:48.030 [2024-11-20 11:08:15.359999] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:48.030 [2024-11-20 11:08:15.360003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:48.030 [2024-11-20 11:08:15.360012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:48.030 [2024-11-20 11:08:15.360022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:48.030 [2024-11-20 11:08:15.367955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:48.030 [2024-11-20 11:08:15.367969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.031 [2024-11-20 11:08:15.367976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.031 [2024-11-20 11:08:15.367983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.031 [2024-11-20 11:08:15.367991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.031 [2024-11-20 11:08:15.367995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.368001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.368009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.375955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.375966] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:48.031 [2024-11-20 11:08:15.375971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.375977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.375983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.375992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.383956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.384014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.384022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:48.031 
[2024-11-20 11:08:15.384029] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:48.031 [2024-11-20 11:08:15.384033] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:48.031 [2024-11-20 11:08:15.384036] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.031 [2024-11-20 11:08:15.384042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.391955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.391966] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:48.031 [2024-11-20 11:08:15.391975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.391982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.391989] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:48.031 [2024-11-20 11:08:15.391992] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:48.031 [2024-11-20 11:08:15.391995] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.031 [2024-11-20 11:08:15.392001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.399955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.399971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.399978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.399985] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:48.031 [2024-11-20 11:08:15.399989] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:48.031 [2024-11-20 11:08:15.399992] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.031 [2024-11-20 11:08:15.399997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.407954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.407964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.407970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.407980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.407986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.407990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.407995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.407999] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:48.031 [2024-11-20 11:08:15.408003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:48.031 [2024-11-20 11:08:15.408008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:48.031 [2024-11-20 11:08:15.408024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.415956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.415968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.423954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.423966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.431954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 
11:08:15.431966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.439955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.439970] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:48.031 [2024-11-20 11:08:15.439975] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:48.031 [2024-11-20 11:08:15.439978] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:48.031 [2024-11-20 11:08:15.439981] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:48.031 [2024-11-20 11:08:15.439984] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:48.031 [2024-11-20 11:08:15.439989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:48.031 [2024-11-20 11:08:15.439996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:48.031 [2024-11-20 11:08:15.440000] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:48.031 [2024-11-20 11:08:15.440003] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.031 [2024-11-20 11:08:15.440009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.440014] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:48.031 [2024-11-20 11:08:15.440019] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:48.031 [2024-11-20 11:08:15.440022] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.031 [2024-11-20 11:08:15.440030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.440037] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:48.031 [2024-11-20 11:08:15.440041] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:48.031 [2024-11-20 11:08:15.440044] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.031 [2024-11-20 11:08:15.440049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:48.031 [2024-11-20 11:08:15.447953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.447967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.447977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:48.031 [2024-11-20 11:08:15.447983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:48.031 ===================================================== 00:13:48.031 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:48.031 ===================================================== 00:13:48.031 Controller Capabilities/Features 00:13:48.031 
00:13:48.031 ================================
00:13:48.031 Vendor ID: 4e58
00:13:48.031 Subsystem Vendor ID: 4e58
00:13:48.031 Serial Number: SPDK2
00:13:48.031 Model Number: SPDK bdev Controller
00:13:48.031 Firmware Version: 25.01
00:13:48.031 Recommended Arb Burst: 6
00:13:48.031 IEEE OUI Identifier: 8d 6b 50
00:13:48.031 Multi-path I/O
00:13:48.031 May have multiple subsystem ports: Yes
00:13:48.031 May have multiple controllers: Yes
00:13:48.031 Associated with SR-IOV VF: No
00:13:48.032 Max Data Transfer Size: 131072
00:13:48.032 Max Number of Namespaces: 32
00:13:48.032 Max Number of I/O Queues: 127
00:13:48.032 NVMe Specification Version (VS): 1.3
00:13:48.032 NVMe Specification Version (Identify): 1.3
00:13:48.032 Maximum Queue Entries: 256
00:13:48.032 Contiguous Queues Required: Yes
00:13:48.032 Arbitration Mechanisms Supported
00:13:48.032 Weighted Round Robin: Not Supported
00:13:48.032 Vendor Specific: Not Supported
00:13:48.032 Reset Timeout: 15000 ms
00:13:48.032 Doorbell Stride: 4 bytes
00:13:48.032 NVM Subsystem Reset: Not Supported
00:13:48.032 Command Sets Supported
00:13:48.032 NVM Command Set: Supported
00:13:48.032 Boot Partition: Not Supported
00:13:48.032 Memory Page Size Minimum: 4096 bytes
00:13:48.032 Memory Page Size Maximum: 4096 bytes
00:13:48.032 Persistent Memory Region: Not Supported
00:13:48.032 Optional Asynchronous Events Supported
00:13:48.032 Namespace Attribute Notices: Supported
00:13:48.032 Firmware Activation Notices: Not Supported
00:13:48.032 ANA Change Notices: Not Supported
00:13:48.032 PLE Aggregate Log Change Notices: Not Supported
00:13:48.032 LBA Status Info Alert Notices: Not Supported
00:13:48.032 EGE Aggregate Log Change Notices: Not Supported
00:13:48.032 Normal NVM Subsystem Shutdown event: Not Supported
00:13:48.032 Zone Descriptor Change Notices: Not Supported
00:13:48.032 Discovery Log Change Notices: Not Supported
00:13:48.032 Controller Attributes
00:13:48.032 128-bit Host Identifier: Supported
00:13:48.032 Non-Operational Permissive Mode: Not Supported
00:13:48.032 NVM Sets: Not Supported
00:13:48.032 Read Recovery Levels: Not Supported
00:13:48.032 Endurance Groups: Not Supported
00:13:48.032 Predictable Latency Mode: Not Supported
00:13:48.032 Traffic Based Keep ALive: Not Supported
00:13:48.032 Namespace Granularity: Not Supported
00:13:48.032 SQ Associations: Not Supported
00:13:48.032 UUID List: Not Supported
00:13:48.032 Multi-Domain Subsystem: Not Supported
00:13:48.032 Fixed Capacity Management: Not Supported
00:13:48.032 Variable Capacity Management: Not Supported
00:13:48.032 Delete Endurance Group: Not Supported
00:13:48.032 Delete NVM Set: Not Supported
00:13:48.032 Extended LBA Formats Supported: Not Supported
00:13:48.032 Flexible Data Placement Supported: Not Supported
00:13:48.032
00:13:48.032 Controller Memory Buffer Support
00:13:48.032 ================================
00:13:48.032 Supported: No
00:13:48.032
00:13:48.032 Persistent Memory Region Support
00:13:48.032 ================================
00:13:48.032 Supported: No
00:13:48.032
00:13:48.032 Admin Command Set Attributes
00:13:48.032 ============================
00:13:48.032 Security Send/Receive: Not Supported
00:13:48.032 Format NVM: Not Supported
00:13:48.032 Firmware Activate/Download: Not Supported
00:13:48.032 Namespace Management: Not Supported
00:13:48.032 Device Self-Test: Not Supported
00:13:48.032 Directives: Not Supported
00:13:48.032 NVMe-MI: Not Supported
00:13:48.032 Virtualization Management: Not Supported
00:13:48.032 Doorbell Buffer Config: Not Supported
00:13:48.032 Get LBA Status Capability: Not Supported
00:13:48.032 Command & Feature Lockdown Capability: Not Supported
00:13:48.032 Abort Command Limit: 4
00:13:48.032 Async Event Request Limit: 4
00:13:48.032 Number of Firmware Slots: N/A
00:13:48.032 Firmware Slot 1 Read-Only: N/A
00:13:48.032 Firmware Activation Without Reset: N/A
00:13:48.032 Multiple Update Detection Support: N/A
00:13:48.032 Firmware Update Granularity: No Information Provided
00:13:48.032 Per-Namespace SMART Log: No
00:13:48.032 Asymmetric Namespace Access Log Page: Not Supported
00:13:48.032 Subsystem NQN: nqn.2019-07.io.spdk:cnode2
00:13:48.032 Command Effects Log Page: Supported
00:13:48.032 Get Log Page Extended Data: Supported
00:13:48.032 Telemetry Log Pages: Not Supported
00:13:48.032 Persistent Event Log Pages: Not Supported
00:13:48.032 Supported Log Pages Log Page: May Support
00:13:48.032 Commands Supported & Effects Log Page: Not Supported
00:13:48.032 Feature Identifiers & Effects Log Page:May Support
00:13:48.032 NVMe-MI Commands & Effects Log Page: May Support
00:13:48.032 Data Area 4 for Telemetry Log: Not Supported
00:13:48.032 Error Log Page Entries Supported: 128
00:13:48.032 Keep Alive: Supported
00:13:48.032 Keep Alive Granularity: 10000 ms
00:13:48.032
00:13:48.032 NVM Command Set Attributes
00:13:48.032 ==========================
00:13:48.032 Submission Queue Entry Size
00:13:48.032 Max: 64
00:13:48.032 Min: 64
00:13:48.032 Completion Queue Entry Size
00:13:48.032 Max: 16
00:13:48.032 Min: 16
00:13:48.032 Number of Namespaces: 32
00:13:48.032 Compare Command: Supported
00:13:48.032 Write Uncorrectable Command: Not Supported
00:13:48.032 Dataset Management Command: Supported
00:13:48.032 Write Zeroes Command: Supported
00:13:48.032 Set Features Save Field: Not Supported
00:13:48.032 Reservations: Not Supported
00:13:48.032 Timestamp: Not Supported
00:13:48.032 Copy: Supported
00:13:48.032 Volatile Write Cache: Present
00:13:48.032 Atomic Write Unit (Normal): 1
00:13:48.032 Atomic Write Unit (PFail): 1
00:13:48.032 Atomic Compare & Write Unit: 1
00:13:48.032 Fused Compare & Write: Supported
00:13:48.032 Scatter-Gather List
00:13:48.032 SGL Command Set: Supported (Dword aligned)
00:13:48.032 SGL Keyed: Not Supported
00:13:48.032 SGL Bit Bucket Descriptor: Not Supported
00:13:48.032 SGL Metadata Pointer: Not Supported
00:13:48.032 Oversized SGL: Not Supported
00:13:48.032 SGL
Metadata Address: Not Supported 00:13:48.032 SGL Offset: Not Supported 00:13:48.032 Transport SGL Data Block: Not Supported 00:13:48.032 Replay Protected Memory Block: Not Supported 00:13:48.032 00:13:48.032 Firmware Slot Information 00:13:48.032 ========================= 00:13:48.032 Active slot: 1 00:13:48.032 Slot 1 Firmware Revision: 25.01 00:13:48.032 00:13:48.032 00:13:48.032 Commands Supported and Effects 00:13:48.032 ============================== 00:13:48.032 Admin Commands 00:13:48.032 -------------- 00:13:48.032 Get Log Page (02h): Supported 00:13:48.032 Identify (06h): Supported 00:13:48.032 Abort (08h): Supported 00:13:48.032 Set Features (09h): Supported 00:13:48.032 Get Features (0Ah): Supported 00:13:48.032 Asynchronous Event Request (0Ch): Supported 00:13:48.032 Keep Alive (18h): Supported 00:13:48.032 I/O Commands 00:13:48.032 ------------ 00:13:48.032 Flush (00h): Supported LBA-Change 00:13:48.032 Write (01h): Supported LBA-Change 00:13:48.032 Read (02h): Supported 00:13:48.032 Compare (05h): Supported 00:13:48.032 Write Zeroes (08h): Supported LBA-Change 00:13:48.032 Dataset Management (09h): Supported LBA-Change 00:13:48.032 Copy (19h): Supported LBA-Change 00:13:48.032 00:13:48.032 Error Log 00:13:48.032 ========= 00:13:48.032 00:13:48.032 Arbitration 00:13:48.032 =========== 00:13:48.032 Arbitration Burst: 1 00:13:48.032 00:13:48.032 Power Management 00:13:48.032 ================ 00:13:48.032 Number of Power States: 1 00:13:48.032 Current Power State: Power State #0 00:13:48.032 Power State #0: 00:13:48.032 Max Power: 0.00 W 00:13:48.032 Non-Operational State: Operational 00:13:48.032 Entry Latency: Not Reported 00:13:48.032 Exit Latency: Not Reported 00:13:48.032 Relative Read Throughput: 0 00:13:48.032 Relative Read Latency: 0 00:13:48.032 Relative Write Throughput: 0 00:13:48.032 Relative Write Latency: 0 00:13:48.032 Idle Power: Not Reported 00:13:48.032 Active Power: Not Reported 00:13:48.032 Non-Operational Permissive Mode: Not 
Supported 00:13:48.032 00:13:48.032 Health Information 00:13:48.032 ================== 00:13:48.032 Critical Warnings: 00:13:48.032 Available Spare Space: OK 00:13:48.032 Temperature: OK 00:13:48.032 Device Reliability: OK 00:13:48.032 Read Only: No 00:13:48.032 Volatile Memory Backup: OK 00:13:48.032 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:48.032 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:48.032 Available Spare: 0% 00:13:48.032 Available Sp[2024-11-20 11:08:15.448076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:48.032 [2024-11-20 11:08:15.455954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:48.032 [2024-11-20 11:08:15.455982] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:48.032 [2024-11-20 11:08:15.455991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.032 [2024-11-20 11:08:15.455997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.032 [2024-11-20 11:08:15.456002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.033 [2024-11-20 11:08:15.456008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.033 [2024-11-20 11:08:15.456064] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:48.033 [2024-11-20 11:08:15.456074] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:48.033 
[2024-11-20 11:08:15.457062] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:48.033 [2024-11-20 11:08:15.457106] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:48.033 [2024-11-20 11:08:15.457112] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:48.033 [2024-11-20 11:08:15.458073] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:48.033 [2024-11-20 11:08:15.458085] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:48.033 [2024-11-20 11:08:15.458132] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:48.033 [2024-11-20 11:08:15.459118] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:48.033 are Threshold: 0% 00:13:48.033 Life Percentage Used: 0% 00:13:48.033 Data Units Read: 0 00:13:48.033 Data Units Written: 0 00:13:48.033 Host Read Commands: 0 00:13:48.033 Host Write Commands: 0 00:13:48.033 Controller Busy Time: 0 minutes 00:13:48.033 Power Cycles: 0 00:13:48.033 Power On Hours: 0 hours 00:13:48.033 Unsafe Shutdowns: 0 00:13:48.033 Unrecoverable Media Errors: 0 00:13:48.033 Lifetime Error Log Entries: 0 00:13:48.033 Warning Temperature Time: 0 minutes 00:13:48.033 Critical Temperature Time: 0 minutes 00:13:48.033 00:13:48.033 Number of Queues 00:13:48.033 ================ 00:13:48.033 Number of I/O Submission Queues: 127 00:13:48.033 Number of I/O Completion Queues: 127 00:13:48.033 00:13:48.033 Active Namespaces 00:13:48.033 ================= 00:13:48.033 Namespace ID:1 00:13:48.033 Error Recovery Timeout: Unlimited 
00:13:48.033 Command Set Identifier: NVM (00h) 00:13:48.033 Deallocate: Supported 00:13:48.033 Deallocated/Unwritten Error: Not Supported 00:13:48.033 Deallocated Read Value: Unknown 00:13:48.033 Deallocate in Write Zeroes: Not Supported 00:13:48.033 Deallocated Guard Field: 0xFFFF 00:13:48.033 Flush: Supported 00:13:48.033 Reservation: Supported 00:13:48.033 Namespace Sharing Capabilities: Multiple Controllers 00:13:48.033 Size (in LBAs): 131072 (0GiB) 00:13:48.033 Capacity (in LBAs): 131072 (0GiB) 00:13:48.033 Utilization (in LBAs): 131072 (0GiB) 00:13:48.033 NGUID: 629BE02682514953B70607556F4CB988 00:13:48.033 UUID: 629be026-8251-4953-b706-07556f4cb988 00:13:48.033 Thin Provisioning: Not Supported 00:13:48.033 Per-NS Atomic Units: Yes 00:13:48.033 Atomic Boundary Size (Normal): 0 00:13:48.033 Atomic Boundary Size (PFail): 0 00:13:48.033 Atomic Boundary Offset: 0 00:13:48.033 Maximum Single Source Range Length: 65535 00:13:48.033 Maximum Copy Length: 65535 00:13:48.033 Maximum Source Range Count: 1 00:13:48.033 NGUID/EUI64 Never Reused: No 00:13:48.033 Namespace Write Protected: No 00:13:48.033 Number of LBA Formats: 1 00:13:48.033 Current LBA Format: LBA Format #00 00:13:48.033 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:48.033 00:13:48.033 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:48.291 [2024-11-20 11:08:15.695459] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:53.552 Initializing NVMe Controllers 00:13:53.552 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:53.552 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:53.552 Initialization complete. Launching workers. 00:13:53.552 ======================================================== 00:13:53.552 Latency(us) 00:13:53.552 Device Information : IOPS MiB/s Average min max 00:13:53.552 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39958.60 156.09 3203.37 960.89 8093.66 00:13:53.552 ======================================================== 00:13:53.552 Total : 39958.60 156.09 3203.37 960.89 8093.66 00:13:53.552 00:13:53.552 [2024-11-20 11:08:20.796224] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:53.552 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:53.552 [2024-11-20 11:08:21.027899] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:58.842 Initializing NVMe Controllers 00:13:58.842 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:58.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:58.842 Initialization complete. Launching workers. 
00:13:58.842 ======================================================== 00:13:58.842 Latency(us) 00:13:58.842 Device Information : IOPS MiB/s Average min max 00:13:58.842 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39948.98 156.05 3203.68 982.05 8206.39 00:13:58.842 ======================================================== 00:13:58.842 Total : 39948.98 156.05 3203.68 982.05 8206.39 00:13:58.842 00:13:58.842 [2024-11-20 11:08:26.048180] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:58.842 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:58.842 [2024-11-20 11:08:26.263514] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:04.104 [2024-11-20 11:08:31.396045] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:04.104 Initializing NVMe Controllers 00:14:04.104 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:04.104 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:04.104 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:04.104 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:04.104 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:04.104 Initialization complete. Launching workers. 
00:14:04.104 Starting thread on core 2 00:14:04.104 Starting thread on core 3 00:14:04.104 Starting thread on core 1 00:14:04.104 11:08:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:04.362 [2024-11-20 11:08:31.685746] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:07.647 [2024-11-20 11:08:34.736973] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.647 Initializing NVMe Controllers 00:14:07.647 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.647 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.647 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:07.647 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:07.647 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:07.647 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:07.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:07.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:07.647 Initialization complete. Launching workers. 
00:14:07.647 Starting thread on core 1 with urgent priority queue 00:14:07.647 Starting thread on core 2 with urgent priority queue 00:14:07.647 Starting thread on core 3 with urgent priority queue 00:14:07.647 Starting thread on core 0 with urgent priority queue 00:14:07.647 SPDK bdev Controller (SPDK2 ) core 0: 9000.67 IO/s 11.11 secs/100000 ios 00:14:07.647 SPDK bdev Controller (SPDK2 ) core 1: 8069.67 IO/s 12.39 secs/100000 ios 00:14:07.647 SPDK bdev Controller (SPDK2 ) core 2: 7136.33 IO/s 14.01 secs/100000 ios 00:14:07.647 SPDK bdev Controller (SPDK2 ) core 3: 8289.67 IO/s 12.06 secs/100000 ios 00:14:07.647 ======================================================== 00:14:07.647 00:14:07.647 11:08:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:07.647 [2024-11-20 11:08:35.026451] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:07.647 Initializing NVMe Controllers 00:14:07.647 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.647 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.647 Namespace ID: 1 size: 0GB 00:14:07.647 Initialization complete. 00:14:07.647 INFO: using host memory buffer for IO 00:14:07.647 Hello world! 
00:14:07.647 [2024-11-20 11:08:35.038528] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.647 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:07.906 [2024-11-20 11:08:35.312831] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:09.279 Initializing NVMe Controllers 00:14:09.279 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:09.279 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:09.279 Initialization complete. Launching workers. 00:14:09.279 submit (in ns) avg, min, max = 6074.8, 3280.0, 4000613.9 00:14:09.279 complete (in ns) avg, min, max = 21855.4, 1818.3, 7986700.0 00:14:09.279 00:14:09.279 Submit histogram 00:14:09.279 ================ 00:14:09.279 Range in us Cumulative Count 00:14:09.279 3.270 - 3.283: 0.0126% ( 2) 00:14:09.279 3.283 - 3.297: 0.0566% ( 7) 00:14:09.279 3.297 - 3.311: 0.2263% ( 27) 00:14:09.279 3.311 - 3.325: 0.6725% ( 71) 00:14:09.279 3.325 - 3.339: 2.6273% ( 311) 00:14:09.279 3.339 - 3.353: 7.2030% ( 728) 00:14:09.279 3.353 - 3.367: 12.8473% ( 898) 00:14:09.279 3.367 - 3.381: 19.2583% ( 1020) 00:14:09.279 3.381 - 3.395: 25.6694% ( 1020) 00:14:09.279 3.395 - 3.409: 31.6719% ( 955) 00:14:09.279 3.409 - 3.423: 36.5242% ( 772) 00:14:09.279 3.423 - 3.437: 42.5456% ( 958) 00:14:09.279 3.437 - 3.450: 46.9956% ( 708) 00:14:09.279 3.450 - 3.464: 50.5594% ( 567) 00:14:09.279 3.464 - 3.478: 54.0981% ( 563) 00:14:09.279 3.478 - 3.492: 59.3652% ( 838) 00:14:09.279 3.492 - 3.506: 66.9767% ( 1211) 00:14:09.279 3.506 - 3.520: 72.0616% ( 809) 00:14:09.279 3.520 - 3.534: 76.6499% ( 730) 00:14:09.279 3.534 - 3.548: 81.0874% ( 706) 00:14:09.279 3.548 - 3.562: 84.3118% ( 513) 
00:14:09.279 3.562 - 3.590: 87.1464% ( 451) 00:14:09.279 3.590 - 3.617: 87.8316% ( 109) 00:14:09.279 3.617 - 3.645: 88.7178% ( 141) 00:14:09.279 3.645 - 3.673: 90.3646% ( 262) 00:14:09.279 3.673 - 3.701: 92.2124% ( 294) 00:14:09.279 3.701 - 3.729: 93.8089% ( 254) 00:14:09.279 3.729 - 3.757: 95.5877% ( 283) 00:14:09.279 3.757 - 3.784: 97.0333% ( 230) 00:14:09.279 3.784 - 3.812: 98.2212% ( 189) 00:14:09.279 3.812 - 3.840: 98.8875% ( 106) 00:14:09.279 3.840 - 3.868: 99.3212% ( 69) 00:14:09.279 3.868 - 3.896: 99.5663% ( 39) 00:14:09.279 3.896 - 3.923: 99.6229% ( 9) 00:14:09.279 3.923 - 3.951: 99.6417% ( 3) 00:14:09.279 3.951 - 3.979: 99.6480% ( 1) 00:14:09.279 5.426 - 5.454: 99.6543% ( 1) 00:14:09.279 5.482 - 5.510: 99.6606% ( 1) 00:14:09.279 5.510 - 5.537: 99.6669% ( 1) 00:14:09.279 5.593 - 5.621: 99.6794% ( 2) 00:14:09.279 5.732 - 5.760: 99.6857% ( 1) 00:14:09.279 5.760 - 5.788: 99.6920% ( 1) 00:14:09.279 5.816 - 5.843: 99.6983% ( 1) 00:14:09.279 5.871 - 5.899: 99.7046% ( 1) 00:14:09.279 6.010 - 6.038: 99.7109% ( 1) 00:14:09.279 6.150 - 6.177: 99.7172% ( 1) 00:14:09.279 6.205 - 6.233: 99.7234% ( 1) 00:14:09.279 6.289 - 6.317: 99.7297% ( 1) 00:14:09.279 6.317 - 6.344: 99.7360% ( 1) 00:14:09.279 6.567 - 6.595: 99.7423% ( 1) 00:14:09.279 6.595 - 6.623: 99.7486% ( 1) 00:14:09.279 6.873 - 6.901: 99.7674% ( 3) 00:14:09.279 6.901 - 6.929: 99.7737% ( 1) 00:14:09.279 6.929 - 6.957: 99.7800% ( 1) 00:14:09.279 7.040 - 7.068: 99.7863% ( 1) 00:14:09.279 7.068 - 7.096: 99.7926% ( 1) 00:14:09.279 7.235 - 7.290: 99.7989% ( 1) 00:14:09.279 7.346 - 7.402: 99.8052% ( 1) 00:14:09.279 7.569 - 7.624: 99.8114% ( 1) 00:14:09.279 7.624 - 7.680: 99.8240% ( 2) 00:14:09.279 7.680 - 7.736: 99.8366% ( 2) 00:14:09.279 7.791 - 7.847: 99.8429% ( 1) 00:14:09.279 8.292 - 8.348: 99.8554% ( 2) 00:14:09.279 8.348 - 8.403: 99.8617% ( 1) 00:14:09.279 8.515 - 8.570: 99.8743% ( 2) 00:14:09.279 8.626 - 8.682: 99.8869% ( 2) 00:14:09.279 8.682 - 8.737: 99.8931% ( 1) 00:14:09.279 8.849 - 8.904: 99.8994% ( 1) 
00:14:09.279 8.904 - 8.960: 99.9057% ( 1) 00:14:09.279 9.016 - 9.071: 99.9120% ( 1) 00:14:09.279 9.238 - 9.294: 99.9183% ( 1) 00:14:09.279 9.350 - 9.405: 99.9246% ( 1) 00:14:09.279 11.464 - 11.520: 99.9309% ( 1) 00:14:09.279 1154.003 - 1161.127: 99.9371% ( 1) 00:14:09.279 3989.148 - 4017.642: 100.0000% ( 10) 00:14:09.279 00:14:09.279 Complete histogram 00:14:09.279 ================== 00:14:09.279 Range in us Cumulative Count 00:14:09.279 1.809 - 1.823: 0.0063% ( 1) 00:14:09.279 1.823 - 1.837: 0.2954% ( 46) 00:14:09.279 1.837 - [2024-11-20 11:08:36.406995] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:09.279 1.850: 1.4959% ( 191) 00:14:09.279 1.850 - 1.864: 4.7014% ( 510) 00:14:09.279 1.864 - 1.878: 50.9617% ( 7360) 00:14:09.279 1.878 - 1.892: 87.7373% ( 5851) 00:14:09.279 1.892 - 1.906: 93.8152% ( 967) 00:14:09.279 1.906 - 1.920: 96.0717% ( 359) 00:14:09.279 1.920 - 1.934: 96.7756% ( 112) 00:14:09.279 1.934 - 1.948: 97.5864% ( 129) 00:14:09.279 1.948 - 1.962: 98.5732% ( 157) 00:14:09.279 1.962 - 1.976: 99.1263% ( 88) 00:14:09.279 1.976 - 1.990: 99.2269% ( 16) 00:14:09.279 1.990 - 2.003: 99.2458% ( 3) 00:14:09.279 2.017 - 2.031: 99.2520% ( 1) 00:14:09.279 2.031 - 2.045: 99.2583% ( 1) 00:14:09.279 2.059 - 2.073: 99.2646% ( 1) 00:14:09.279 2.073 - 2.087: 99.2772% ( 2) 00:14:09.279 3.645 - 3.673: 99.2835% ( 1) 00:14:09.279 3.812 - 3.840: 99.2898% ( 1) 00:14:09.279 3.923 - 3.951: 99.2960% ( 1) 00:14:09.279 4.118 - 4.146: 99.3023% ( 1) 00:14:09.279 4.146 - 4.174: 99.3086% ( 1) 00:14:09.279 4.174 - 4.202: 99.3149% ( 1) 00:14:09.279 4.202 - 4.230: 99.3212% ( 1) 00:14:09.279 4.313 - 4.341: 99.3275% ( 1) 00:14:09.279 4.424 - 4.452: 99.3338% ( 1) 00:14:09.279 4.452 - 4.480: 99.3400% ( 1) 00:14:09.279 4.563 - 4.591: 99.3463% ( 1) 00:14:09.279 4.591 - 4.619: 99.3526% ( 1) 00:14:09.279 4.703 - 4.730: 99.3589% ( 1) 00:14:09.279 5.315 - 5.343: 99.3715% ( 2) 00:14:09.279 5.343 - 5.370: 99.3777% ( 1) 00:14:09.279 5.370 - 
5.398: 99.3840% ( 1) 00:14:09.279 5.482 - 5.510: 99.3903% ( 1) 00:14:09.279 5.537 - 5.565: 99.4029% ( 2) 00:14:09.279 5.649 - 5.677: 99.4092% ( 1) 00:14:09.279 5.899 - 5.927: 99.4155% ( 1) 00:14:09.279 6.094 - 6.122: 99.4217% ( 1) 00:14:09.279 6.511 - 6.539: 99.4280% ( 1) 00:14:09.279 6.539 - 6.567: 99.4343% ( 1) 00:14:09.279 6.817 - 6.845: 99.4406% ( 1) 00:14:09.279 6.845 - 6.873: 99.4469% ( 1) 00:14:09.279 6.957 - 6.984: 99.4532% ( 1) 00:14:09.279 6.984 - 7.012: 99.4595% ( 1) 00:14:09.279 7.290 - 7.346: 99.4657% ( 1) 00:14:09.279 7.513 - 7.569: 99.4720% ( 1) 00:14:09.279 7.680 - 7.736: 99.4783% ( 1) 00:14:09.279 8.070 - 8.125: 99.4846% ( 1) 00:14:09.279 8.570 - 8.626: 99.4909% ( 1) 00:14:09.279 8.849 - 8.904: 99.4972% ( 1) 00:14:09.279 11.075 - 11.130: 99.5035% ( 1) 00:14:09.279 13.746 - 13.802: 99.5097% ( 1) 00:14:09.279 3989.148 - 4017.642: 99.9811% ( 75) 00:14:09.279 4046.136 - 4074.630: 99.9874% ( 1) 00:14:09.279 5983.722 - 6012.216: 99.9937% ( 1) 00:14:09.279 7978.296 - 8035.283: 100.0000% ( 1) 00:14:09.279 00:14:09.279 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:09.279 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:09.279 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:09.279 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:09.279 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:09.279 [ 00:14:09.279 { 00:14:09.279 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:09.279 "subtype": "Discovery", 00:14:09.279 "listen_addresses": [], 00:14:09.280 
"allow_any_host": true, 00:14:09.280 "hosts": [] 00:14:09.280 }, 00:14:09.280 { 00:14:09.280 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:09.280 "subtype": "NVMe", 00:14:09.280 "listen_addresses": [ 00:14:09.280 { 00:14:09.280 "trtype": "VFIOUSER", 00:14:09.280 "adrfam": "IPv4", 00:14:09.280 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:09.280 "trsvcid": "0" 00:14:09.280 } 00:14:09.280 ], 00:14:09.280 "allow_any_host": true, 00:14:09.280 "hosts": [], 00:14:09.280 "serial_number": "SPDK1", 00:14:09.280 "model_number": "SPDK bdev Controller", 00:14:09.280 "max_namespaces": 32, 00:14:09.280 "min_cntlid": 1, 00:14:09.280 "max_cntlid": 65519, 00:14:09.280 "namespaces": [ 00:14:09.280 { 00:14:09.280 "nsid": 1, 00:14:09.280 "bdev_name": "Malloc1", 00:14:09.280 "name": "Malloc1", 00:14:09.280 "nguid": "7EB7C8F3B3AD4B6A8FD9F79624686F8C", 00:14:09.280 "uuid": "7eb7c8f3-b3ad-4b6a-8fd9-f79624686f8c" 00:14:09.280 }, 00:14:09.280 { 00:14:09.280 "nsid": 2, 00:14:09.280 "bdev_name": "Malloc3", 00:14:09.280 "name": "Malloc3", 00:14:09.280 "nguid": "A2FCE89A202744919DC2C99DC27815DD", 00:14:09.280 "uuid": "a2fce89a-2027-4491-9dc2-c99dc27815dd" 00:14:09.280 } 00:14:09.280 ] 00:14:09.280 }, 00:14:09.280 { 00:14:09.280 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:09.280 "subtype": "NVMe", 00:14:09.280 "listen_addresses": [ 00:14:09.280 { 00:14:09.280 "trtype": "VFIOUSER", 00:14:09.280 "adrfam": "IPv4", 00:14:09.280 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:09.280 "trsvcid": "0" 00:14:09.280 } 00:14:09.280 ], 00:14:09.280 "allow_any_host": true, 00:14:09.280 "hosts": [], 00:14:09.280 "serial_number": "SPDK2", 00:14:09.280 "model_number": "SPDK bdev Controller", 00:14:09.280 "max_namespaces": 32, 00:14:09.280 "min_cntlid": 1, 00:14:09.280 "max_cntlid": 65519, 00:14:09.280 "namespaces": [ 00:14:09.280 { 00:14:09.280 "nsid": 1, 00:14:09.280 "bdev_name": "Malloc2", 00:14:09.280 "name": "Malloc2", 00:14:09.280 "nguid": "629BE02682514953B70607556F4CB988", 00:14:09.280 
"uuid": "629be026-8251-4953-b706-07556f4cb988" 00:14:09.280 } 00:14:09.280 ] 00:14:09.280 } 00:14:09.280 ] 00:14:09.280 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:09.280 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:09.280 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4026459 00:14:09.280 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:09.280 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:09.280 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:09.280 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:09.280 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:09.280 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:09.280 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:09.538 [2024-11-20 11:08:36.789461] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:09.538 Malloc4 00:14:09.538 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:09.796 [2024-11-20 11:08:37.034304] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:09.796 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:09.796 Asynchronous Event Request test 00:14:09.796 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:09.796 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:09.796 Registering asynchronous event callbacks... 00:14:09.796 Starting namespace attribute notice tests for all controllers... 00:14:09.796 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:09.796 aer_cb - Changed Namespace 00:14:09.796 Cleaning up... 
00:14:09.796 [ 00:14:09.796 { 00:14:09.796 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:09.796 "subtype": "Discovery", 00:14:09.796 "listen_addresses": [], 00:14:09.796 "allow_any_host": true, 00:14:09.796 "hosts": [] 00:14:09.796 }, 00:14:09.796 { 00:14:09.796 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:09.796 "subtype": "NVMe", 00:14:09.796 "listen_addresses": [ 00:14:09.796 { 00:14:09.796 "trtype": "VFIOUSER", 00:14:09.796 "adrfam": "IPv4", 00:14:09.796 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:09.796 "trsvcid": "0" 00:14:09.796 } 00:14:09.796 ], 00:14:09.796 "allow_any_host": true, 00:14:09.796 "hosts": [], 00:14:09.796 "serial_number": "SPDK1", 00:14:09.796 "model_number": "SPDK bdev Controller", 00:14:09.796 "max_namespaces": 32, 00:14:09.796 "min_cntlid": 1, 00:14:09.796 "max_cntlid": 65519, 00:14:09.796 "namespaces": [ 00:14:09.796 { 00:14:09.796 "nsid": 1, 00:14:09.796 "bdev_name": "Malloc1", 00:14:09.796 "name": "Malloc1", 00:14:09.796 "nguid": "7EB7C8F3B3AD4B6A8FD9F79624686F8C", 00:14:09.796 "uuid": "7eb7c8f3-b3ad-4b6a-8fd9-f79624686f8c" 00:14:09.796 }, 00:14:09.796 { 00:14:09.796 "nsid": 2, 00:14:09.796 "bdev_name": "Malloc3", 00:14:09.796 "name": "Malloc3", 00:14:09.796 "nguid": "A2FCE89A202744919DC2C99DC27815DD", 00:14:09.796 "uuid": "a2fce89a-2027-4491-9dc2-c99dc27815dd" 00:14:09.796 } 00:14:09.796 ] 00:14:09.796 }, 00:14:09.796 { 00:14:09.796 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:09.796 "subtype": "NVMe", 00:14:09.796 "listen_addresses": [ 00:14:09.796 { 00:14:09.796 "trtype": "VFIOUSER", 00:14:09.796 "adrfam": "IPv4", 00:14:09.796 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:09.796 "trsvcid": "0" 00:14:09.796 } 00:14:09.796 ], 00:14:09.796 "allow_any_host": true, 00:14:09.796 "hosts": [], 00:14:09.796 "serial_number": "SPDK2", 00:14:09.796 "model_number": "SPDK bdev Controller", 00:14:09.796 "max_namespaces": 32, 00:14:09.796 "min_cntlid": 1, 00:14:09.796 "max_cntlid": 65519, 00:14:09.796 "namespaces": [ 
00:14:09.796 { 00:14:09.796 "nsid": 1, 00:14:09.796 "bdev_name": "Malloc2", 00:14:09.796 "name": "Malloc2", 00:14:09.796 "nguid": "629BE02682514953B70607556F4CB988", 00:14:09.796 "uuid": "629be026-8251-4953-b706-07556f4cb988" 00:14:09.796 }, 00:14:09.796 { 00:14:09.796 "nsid": 2, 00:14:09.796 "bdev_name": "Malloc4", 00:14:09.796 "name": "Malloc4", 00:14:09.796 "nguid": "38666130EDD44F349EFF8D86D5F91A23", 00:14:09.796 "uuid": "38666130-edd4-4f34-9eff-8d86d5f91a23" 00:14:09.796 } 00:14:09.796 ] 00:14:09.796 } 00:14:09.796 ] 00:14:09.796 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4026459 00:14:09.796 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:09.796 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4018758 00:14:09.796 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 4018758 ']' 00:14:09.796 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 4018758 00:14:09.796 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:09.796 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.796 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4018758 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4018758' 00:14:10.055 killing process with pid 4018758 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 4018758 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 4018758 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4026539 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4026539' 00:14:10.055 Process pid: 4026539 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:10.055 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:10.056 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4026539 00:14:10.056 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 4026539 ']' 00:14:10.056 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.056 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.056 
11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.056 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.056 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:10.315 [2024-11-20 11:08:37.589348] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:10.315 [2024-11-20 11:08:37.590266] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:14:10.315 [2024-11-20 11:08:37.590308] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.315 [2024-11-20 11:08:37.667775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.315 [2024-11-20 11:08:37.708181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.315 [2024-11-20 11:08:37.708223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.315 [2024-11-20 11:08:37.708230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.315 [2024-11-20 11:08:37.708236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.315 [2024-11-20 11:08:37.708240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:10.315 [2024-11-20 11:08:37.709864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.315 [2024-11-20 11:08:37.709989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.315 [2024-11-20 11:08:37.710036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.315 [2024-11-20 11:08:37.710037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.315 [2024-11-20 11:08:37.778527] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:10.315 [2024-11-20 11:08:37.778707] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:10.315 [2024-11-20 11:08:37.779401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:10.315 [2024-11-20 11:08:37.779800] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:10.315 [2024-11-20 11:08:37.779835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
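The trace above shows the target restarting with `--interrupt-mode` and its reactors and poll-group threads switching to interrupt mode; the lines that follow re-provision it over RPC. As a minimal sketch (not the test script itself), the `setup_nvmf_vfio_user` sequence visible in this log reduces to the RPC calls below. The `rpc.py` path and socket directories are the ones this CI job happens to use and will differ elsewhere; the sketch assumes an `nvmf_tgt` process is already running and listening on the default `/var/tmp/spdk.sock`.

```shell
# Hedged summary of the vfio-user provisioning steps seen in this log.
# Assumes a running SPDK nvmf_tgt; RPC path is specific to this CI host.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Create the VFIOUSER transport (-M -I are the extra transport args
# passed for the interrupt-mode variant of the test).
$RPC nvmf_create_transport -t VFIOUSER -M -I

for i in 1 2; do
    # One vfio-user socket directory per emulated controller.
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i

    # 64 MiB malloc bdev with 512-byte blocks to back namespace 1.
    $RPC bdev_malloc_create 64 512 -b Malloc$i

    # Subsystem allowing any host (-a) with serial number SPDK$i (-s).
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done
```

Teardown mirrors this, as the later `stop_nvmf_vfio_user` lines show: kill the target process and `rm -rf /var/run/vfio-user`.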
00:14:10.574 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.574 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:10.574 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:11.510 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:11.769 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:11.769 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:11.769 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:11.769 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:11.769 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:11.769 Malloc1 00:14:12.026 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:12.026 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:12.283 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:12.542 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:12.542 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:12.542 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:12.799 Malloc2 00:14:12.800 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:13.057 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:13.057 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4026539 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 4026539 ']' 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 4026539 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.316 11:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4026539 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4026539' 00:14:13.316 killing process with pid 4026539 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 4026539 00:14:13.316 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 4026539 00:14:13.575 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:13.575 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:13.575 00:14:13.575 real 0m50.863s 00:14:13.575 user 3m16.561s 00:14:13.575 sys 0m3.321s 00:14:13.575 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.575 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:13.575 ************************************ 00:14:13.575 END TEST nvmf_vfio_user 00:14:13.575 ************************************ 00:14:13.575 11:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:13.575 11:08:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:13.575 11:08:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.575 11:08:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.575 ************************************ 00:14:13.575 START TEST nvmf_vfio_user_nvme_compliance 00:14:13.575 ************************************ 00:14:13.575 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:13.836 * Looking for test storage... 00:14:13.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:13.836 11:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.836 11:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:13.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.836 --rc genhtml_branch_coverage=1 00:14:13.836 --rc genhtml_function_coverage=1 00:14:13.836 --rc genhtml_legend=1 00:14:13.836 --rc geninfo_all_blocks=1 00:14:13.836 --rc geninfo_unexecuted_blocks=1 00:14:13.836 00:14:13.836 ' 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:13.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.836 --rc genhtml_branch_coverage=1 00:14:13.836 --rc genhtml_function_coverage=1 00:14:13.836 --rc genhtml_legend=1 00:14:13.836 --rc geninfo_all_blocks=1 00:14:13.836 --rc geninfo_unexecuted_blocks=1 00:14:13.836 00:14:13.836 ' 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:13.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.836 --rc genhtml_branch_coverage=1 00:14:13.836 --rc genhtml_function_coverage=1 00:14:13.836 --rc 
genhtml_legend=1 00:14:13.836 --rc geninfo_all_blocks=1 00:14:13.836 --rc geninfo_unexecuted_blocks=1 00:14:13.836 00:14:13.836 ' 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:13.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.836 --rc genhtml_branch_coverage=1 00:14:13.836 --rc genhtml_function_coverage=1 00:14:13.836 --rc genhtml_legend=1 00:14:13.836 --rc geninfo_all_blocks=1 00:14:13.836 --rc geninfo_unexecuted_blocks=1 00:14:13.836 00:14:13.836 ' 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.836 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.837 11:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:13.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:13.837 11:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=4027241 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4027241' 00:14:13.837 Process pid: 4027241 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4027241 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 4027241 ']' 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.837 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:13.837 [2024-11-20 11:08:41.296387] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:14:13.837 [2024-11-20 11:08:41.296436] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.112 [2024-11-20 11:08:41.371137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.112 [2024-11-20 11:08:41.410367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.113 [2024-11-20 11:08:41.410406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.113 [2024-11-20 11:08:41.410413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.113 [2024-11-20 11:08:41.410420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.113 [2024-11-20 11:08:41.410425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:14.113 [2024-11-20 11:08:41.411878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.113 [2024-11-20 11:08:41.412000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.113 [2024-11-20 11:08:41.412002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.113 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.113 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:14.113 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:15.044 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:15.044 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:15.044 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:15.044 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.044 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.044 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.044 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:15.044 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:15.044 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.044 11:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.361 malloc0 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:15.361 11:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:15.361 00:14:15.361 00:14:15.361 CUnit - A unit testing framework for C - Version 2.1-3 00:14:15.361 http://cunit.sourceforge.net/ 00:14:15.361 00:14:15.361 00:14:15.361 Suite: nvme_compliance 00:14:15.361 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 11:08:42.756940] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:15.361 [2024-11-20 11:08:42.758301] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:15.361 [2024-11-20 11:08:42.758317] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:15.361 [2024-11-20 11:08:42.758323] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:15.361 [2024-11-20 11:08:42.759964] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:15.361 passed 00:14:15.361 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 11:08:42.839556] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:15.361 [2024-11-20 11:08:42.842576] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:15.618 passed 00:14:15.618 Test: admin_identify_ns ...[2024-11-20 11:08:42.923378] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:15.618 [2024-11-20 11:08:42.983969] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:15.618 [2024-11-20 11:08:42.991971] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:15.618 [2024-11-20 11:08:43.013056] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:15.618 passed 00:14:15.618 Test: admin_get_features_mandatory_features ...[2024-11-20 11:08:43.087215] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:15.618 [2024-11-20 11:08:43.090239] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:15.876 passed 00:14:15.876 Test: admin_get_features_optional_features ...[2024-11-20 11:08:43.168769] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:15.876 [2024-11-20 11:08:43.171785] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:15.876 passed 00:14:15.876 Test: admin_set_features_number_of_queues ...[2024-11-20 11:08:43.248319] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:15.876 [2024-11-20 11:08:43.357039] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.133 passed 00:14:16.133 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 11:08:43.430075] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.133 [2024-11-20 11:08:43.433096] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.133 passed 00:14:16.133 Test: admin_get_log_page_with_lpo ...[2024-11-20 11:08:43.511886] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.133 [2024-11-20 11:08:43.580959] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:16.133 [2024-11-20 11:08:43.594004] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.133 passed 00:14:16.391 Test: fabric_property_get ...[2024-11-20 11:08:43.667934] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.391 [2024-11-20 11:08:43.669178] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:16.391 [2024-11-20 11:08:43.670957] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.391 passed 00:14:16.391 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 11:08:43.749468] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.391 [2024-11-20 11:08:43.750704] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:16.391 [2024-11-20 11:08:43.752484] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.391 passed 00:14:16.391 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 11:08:43.831304] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.648 [2024-11-20 11:08:43.915955] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:16.648 [2024-11-20 11:08:43.931958] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:16.648 [2024-11-20 11:08:43.937034] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.648 passed 00:14:16.648 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 11:08:44.013955] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.648 [2024-11-20 11:08:44.015196] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:16.648 [2024-11-20 11:08:44.016984] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.648 passed 00:14:16.648 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 11:08:44.092301] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.906 [2024-11-20 11:08:44.171957] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:16.906 [2024-11-20 
11:08:44.195953] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:16.906 [2024-11-20 11:08:44.201033] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.906 passed 00:14:16.906 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 11:08:44.274951] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.906 [2024-11-20 11:08:44.276188] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:16.906 [2024-11-20 11:08:44.276210] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:16.906 [2024-11-20 11:08:44.277969] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.906 passed 00:14:16.906 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 11:08:44.355764] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.165 [2024-11-20 11:08:44.448960] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:17.165 [2024-11-20 11:08:44.456958] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:17.165 [2024-11-20 11:08:44.464958] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:17.165 [2024-11-20 11:08:44.472960] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:17.165 [2024-11-20 11:08:44.502035] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.165 passed 00:14:17.165 Test: admin_create_io_sq_verify_pc ...[2024-11-20 11:08:44.578057] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.165 [2024-11-20 11:08:44.595965] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:17.165 [2024-11-20 11:08:44.613236] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.165 passed 00:14:17.423 Test: admin_create_io_qp_max_qps ...[2024-11-20 11:08:44.687733] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.357 [2024-11-20 11:08:45.801961] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:18.923 [2024-11-20 11:08:46.182428] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.923 passed 00:14:18.923 Test: admin_create_io_sq_shared_cq ...[2024-11-20 11:08:46.259484] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.923 [2024-11-20 11:08:46.390956] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:19.183 [2024-11-20 11:08:46.428013] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.183 passed 00:14:19.183 00:14:19.183 Run Summary: Type Total Ran Passed Failed Inactive 00:14:19.183 suites 1 1 n/a 0 0 00:14:19.183 tests 18 18 18 0 0 00:14:19.183 asserts 360 360 360 0 n/a 00:14:19.183 00:14:19.183 Elapsed time = 1.510 seconds 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4027241 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 4027241 ']' 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 4027241 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4027241 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4027241' 00:14:19.183 killing process with pid 4027241 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 4027241 00:14:19.183 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 4027241 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:19.443 00:14:19.443 real 0m5.658s 00:14:19.443 user 0m15.778s 00:14:19.443 sys 0m0.516s 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:19.443 ************************************ 00:14:19.443 END TEST nvmf_vfio_user_nvme_compliance 00:14:19.443 ************************************ 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:19.443 ************************************ 00:14:19.443 START TEST nvmf_vfio_user_fuzz 00:14:19.443 ************************************ 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:19.443 * Looking for test storage... 00:14:19.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:19.443 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:19.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.443 --rc genhtml_branch_coverage=1 00:14:19.443 --rc genhtml_function_coverage=1 00:14:19.443 --rc genhtml_legend=1 00:14:19.443 --rc geninfo_all_blocks=1 00:14:19.443 --rc geninfo_unexecuted_blocks=1 00:14:19.443 00:14:19.443 ' 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:19.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.443 --rc genhtml_branch_coverage=1 00:14:19.443 --rc genhtml_function_coverage=1 00:14:19.443 --rc genhtml_legend=1 00:14:19.443 --rc geninfo_all_blocks=1 00:14:19.443 --rc geninfo_unexecuted_blocks=1 00:14:19.443 00:14:19.443 ' 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:19.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.443 --rc genhtml_branch_coverage=1 00:14:19.443 --rc genhtml_function_coverage=1 00:14:19.443 --rc genhtml_legend=1 00:14:19.443 --rc geninfo_all_blocks=1 00:14:19.443 --rc geninfo_unexecuted_blocks=1 00:14:19.443 00:14:19.443 ' 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:19.443 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:19.443 --rc genhtml_branch_coverage=1 00:14:19.443 --rc genhtml_function_coverage=1 00:14:19.443 --rc genhtml_legend=1 00:14:19.443 --rc geninfo_all_blocks=1 00:14:19.443 --rc geninfo_unexecuted_blocks=1 00:14:19.443 00:14:19.443 ' 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.443 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.703 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.704 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:19.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4028229 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4028229' 00:14:19.704 Process pid: 4028229 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4028229 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 4028229 ']' 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.704 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.704 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:19.963 11:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.963 11:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:19.963 11:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:20.900 malloc0 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:20.900 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.901 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:20.901 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.901 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:20.901 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.901 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:20.901 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:52.990 Fuzzing completed. Shutting down the fuzz application 00:14:52.990 00:14:52.990 Dumping successful admin opcodes: 00:14:52.990 8, 9, 10, 24, 00:14:52.990 Dumping successful io opcodes: 00:14:52.990 0, 00:14:52.990 NS: 0x20000081ef00 I/O qp, Total commands completed: 999592, total successful commands: 3911, random_seed: 1042218048 00:14:52.990 NS: 0x20000081ef00 admin qp, Total commands completed: 246940, total successful commands: 1993, random_seed: 1164669568 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4028229 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 4028229 ']' 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 4028229 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4028229 00:14:52.990 11:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4028229' 00:14:52.990 killing process with pid 4028229 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 4028229 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 4028229 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:52.990 00:14:52.990 real 0m32.201s 00:14:52.990 user 0m30.102s 00:14:52.990 sys 0m30.996s 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.990 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:52.990 ************************************ 00:14:52.990 END TEST nvmf_vfio_user_fuzz 00:14:52.990 ************************************ 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.990 ************************************ 00:14:52.990 START TEST nvmf_auth_target 00:14:52.990 ************************************ 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:52.990 * Looking for test storage... 00:14:52.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.990 11:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.990 11:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:52.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.990 --rc genhtml_branch_coverage=1 00:14:52.990 --rc genhtml_function_coverage=1 00:14:52.990 --rc genhtml_legend=1 00:14:52.990 --rc geninfo_all_blocks=1 00:14:52.990 --rc geninfo_unexecuted_blocks=1 00:14:52.990 00:14:52.990 ' 00:14:52.990 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:52.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.990 --rc genhtml_branch_coverage=1 00:14:52.990 --rc genhtml_function_coverage=1 00:14:52.990 --rc genhtml_legend=1 00:14:52.991 --rc geninfo_all_blocks=1 00:14:52.991 --rc geninfo_unexecuted_blocks=1 00:14:52.991 00:14:52.991 ' 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:52.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.991 --rc genhtml_branch_coverage=1 00:14:52.991 --rc genhtml_function_coverage=1 00:14:52.991 --rc genhtml_legend=1 00:14:52.991 --rc geninfo_all_blocks=1 00:14:52.991 --rc geninfo_unexecuted_blocks=1 00:14:52.991 00:14:52.991 ' 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:52.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.991 --rc genhtml_branch_coverage=1 00:14:52.991 --rc genhtml_function_coverage=1 00:14:52.991 --rc genhtml_legend=1 00:14:52.991 
--rc geninfo_all_blocks=1 00:14:52.991 --rc geninfo_unexecuted_blocks=1 00:14:52.991 00:14:52.991 ' 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.991 
11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:52.991 11:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:52.991 11:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:52.991 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.417 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:58.418 11:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:58.418 11:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:58.418 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:58.418 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.418 
11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:58.418 Found net devices under 0000:86:00.0: cvl_0_0 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.418 
11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:58.418 Found net devices under 0000:86:00.1: cvl_0_1 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:58.418 11:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.418 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.418 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.418 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.418 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:58.418 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.418 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.418 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.418 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:58.418 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:58.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:14:58.418 00:14:58.418 --- 10.0.0.2 ping statistics --- 00:14:58.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.418 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:14:58.418 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:14:58.418 00:14:58.418 --- 10.0.0.1 ping statistics --- 00:14:58.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.419 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=4037232 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 4037232 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4037232 ']' 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=4037281 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5c8230c94d5f4839239c570ea218778a189280278e861417 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Nif 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5c8230c94d5f4839239c570ea218778a189280278e861417 0 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5c8230c94d5f4839239c570ea218778a189280278e861417 0 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5c8230c94d5f4839239c570ea218778a189280278e861417 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Nif 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Nif 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Nif 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5c425cb8b8e9798245692ca8878310383a760dcd6be95169f0a2992bf64bd42a 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Pw7 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5c425cb8b8e9798245692ca8878310383a760dcd6be95169f0a2992bf64bd42a 3 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5c425cb8b8e9798245692ca8878310383a760dcd6be95169f0a2992bf64bd42a 3 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5c425cb8b8e9798245692ca8878310383a760dcd6be95169f0a2992bf64bd42a 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Pw7 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Pw7 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Pw7 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ff0c1be2ba2d435e649ff7b7e95ef303 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Xir 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ff0c1be2ba2d435e649ff7b7e95ef303 1 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
ff0c1be2ba2d435e649ff7b7e95ef303 1 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ff0c1be2ba2d435e649ff7b7e95ef303 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Xir 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Xir 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Xir 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d4ce692d9266564f4424a46cb1972bf2ad72a6abb1805ff0 00:14:58.419 11:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.l8t 00:14:58.419 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d4ce692d9266564f4424a46cb1972bf2ad72a6abb1805ff0 2 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d4ce692d9266564f4424a46cb1972bf2ad72a6abb1805ff0 2 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d4ce692d9266564f4424a46cb1972bf2ad72a6abb1805ff0 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.l8t 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.l8t 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.l8t 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cf558c2599322936a6f0d58fd247f18cf9b49e0149b5b02b 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BvB 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cf558c2599322936a6f0d58fd247f18cf9b49e0149b5b02b 2 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cf558c2599322936a6f0d58fd247f18cf9b49e0149b5b02b 2 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cf558c2599322936a6f0d58fd247f18cf9b49e0149b5b02b 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BvB 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BvB 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.BvB 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eba01d25bf12a8038de345b2b4ee940e 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.S21 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eba01d25bf12a8038de345b2b4ee940e 1 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eba01d25bf12a8038de345b2b4ee940e 1 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eba01d25bf12a8038de345b2b4ee940e 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.S21 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.S21 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.S21 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5c0962d63d0b6f274e0a049ee96725bb4b98af61fb316e725efaf809dece3b21 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.H31 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5c0962d63d0b6f274e0a049ee96725bb4b98af61fb316e725efaf809dece3b21 3 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 5c0962d63d0b6f274e0a049ee96725bb4b98af61fb316e725efaf809dece3b21 3 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5c0962d63d0b6f274e0a049ee96725bb4b98af61fb316e725efaf809dece3b21 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:58.420 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.H31 00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.H31 00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.H31 00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 4037232 00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4037232 ']' 00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
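The repeated gen_dhchap_key / format_dhchap_key sequences above boil down to a small recipe: read N random bytes as hex from /dev/urandom, then wrap the hex string in the DHHC-1 secret format (base64 of the ASCII key with a little-endian CRC32 appended, prefixed by a two-digit digest id: 00 for null, 01/02/03 for sha256/384/512). The sketch below reproduces that flow for the `null 48` case; the exact encoding is inferred from nvmf/common.sh's format_key and should be treated as an approximation, and `od` stands in for the log's `xxd -p` for portability.

```shell
# 48 hex chars (24 random bytes); the log uses `xxd -p -c0 -l 24 /dev/urandom`
key=$(head -c 24 /dev/urandom | od -An -vtx1 | tr -d ' \n')
file=$(mktemp -t spdk.key-null.XXX)

# format_key equivalent: base64(ascii(key) + crc32(key) little-endian),
# digest id 00 for the "null" digest
python3 - "$key" <<'EOF' > "$file"
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:00:{}:".format(base64.b64encode(key + crc).decode()))
EOF

chmod 0600 "$file"   # DHCHAP secrets must not be world-readable
```

The resulting file is what the test later hands to `keyring_file_add_key` on both the target and host RPC sockets.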
00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.680 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.680 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.680 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:58.680 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 4037281 /var/tmp/host.sock 00:14:58.680 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4037281 ']' 00:14:58.680 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:58.680 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.680 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:58.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:58.680 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.680 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Nif 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Nif 00:14:58.940 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Nif 00:14:59.199 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.Pw7 ]] 00:14:59.199 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pw7 00:14:59.199 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.199 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.199 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.199 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pw7 00:14:59.199 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pw7 00:14:59.458 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:59.458 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Xir 00:14:59.458 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.458 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.458 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.458 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Xir 00:14:59.458 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Xir 00:14:59.717 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.l8t ]] 00:14:59.717 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l8t 00:14:59.717 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.717 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.717 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.717 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l8t 00:14:59.717 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l8t 00:14:59.717 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:59.717 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BvB 00:14:59.717 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.717 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.717 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.717 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.BvB 00:14:59.717 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.BvB 00:14:59.976 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.S21 ]] 00:14:59.976 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.S21 00:14:59.976 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.976 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.976 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.976 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.S21 00:14:59.976 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.S21 00:15:00.235 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:00.235 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.H31 00:15:00.235 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.235 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.235 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.235 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.H31 00:15:00.235 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.H31 00:15:00.494 11:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:00.494 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:00.494 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.494 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.494 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:00.494 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:00.753 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:00.753 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.753 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:00.753 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:00.753 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:00.753 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.753 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.753 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.753 11:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.753 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.753 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.753 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.753 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.753 00:15:01.012 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.012 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.012 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.012 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.012 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.012 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.012 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.012 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.012 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.012 { 00:15:01.012 "cntlid": 1, 00:15:01.012 "qid": 0, 00:15:01.012 "state": "enabled", 00:15:01.012 "thread": "nvmf_tgt_poll_group_000", 00:15:01.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:01.012 "listen_address": { 00:15:01.012 "trtype": "TCP", 00:15:01.012 "adrfam": "IPv4", 00:15:01.012 "traddr": "10.0.0.2", 00:15:01.012 "trsvcid": "4420" 00:15:01.012 }, 00:15:01.012 "peer_address": { 00:15:01.013 "trtype": "TCP", 00:15:01.013 "adrfam": "IPv4", 00:15:01.013 "traddr": "10.0.0.1", 00:15:01.013 "trsvcid": "40480" 00:15:01.013 }, 00:15:01.013 "auth": { 00:15:01.013 "state": "completed", 00:15:01.013 "digest": "sha256", 00:15:01.013 "dhgroup": "null" 00:15:01.013 } 00:15:01.013 } 00:15:01.013 ]' 00:15:01.013 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.271 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.271 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.271 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:01.271 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.271 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.271 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.271 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.531 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:01.531 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:02.099 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.099 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:02.099 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.099 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.099 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.099 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.099 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:02.099 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.359 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.359 00:15:02.618 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.618 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.618 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.618 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.618 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.618 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.618 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.618 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.618 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.618 { 00:15:02.618 "cntlid": 3, 00:15:02.618 "qid": 0, 00:15:02.618 "state": "enabled", 00:15:02.618 "thread": "nvmf_tgt_poll_group_000", 00:15:02.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:02.618 "listen_address": { 00:15:02.618 "trtype": "TCP", 00:15:02.618 "adrfam": "IPv4", 00:15:02.618 
"traddr": "10.0.0.2", 00:15:02.618 "trsvcid": "4420" 00:15:02.618 }, 00:15:02.618 "peer_address": { 00:15:02.618 "trtype": "TCP", 00:15:02.618 "adrfam": "IPv4", 00:15:02.618 "traddr": "10.0.0.1", 00:15:02.618 "trsvcid": "40510" 00:15:02.618 }, 00:15:02.618 "auth": { 00:15:02.618 "state": "completed", 00:15:02.618 "digest": "sha256", 00:15:02.618 "dhgroup": "null" 00:15:02.618 } 00:15:02.618 } 00:15:02.618 ]' 00:15:02.618 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.877 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.877 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.877 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:02.877 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.877 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.877 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.877 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.135 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:03.135 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:03.703 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.703 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:03.703 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.703 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.703 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.703 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.703 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.703 11:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.703 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:03.703 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.703 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.703 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:03.963 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:03.963 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.963 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.963 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.963 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.963 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.963 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.963 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.963 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.963 00:15:04.222 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.222 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.222 
11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.222 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.222 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.222 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.222 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.222 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.222 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.222 { 00:15:04.222 "cntlid": 5, 00:15:04.222 "qid": 0, 00:15:04.222 "state": "enabled", 00:15:04.222 "thread": "nvmf_tgt_poll_group_000", 00:15:04.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:04.222 "listen_address": { 00:15:04.222 "trtype": "TCP", 00:15:04.222 "adrfam": "IPv4", 00:15:04.222 "traddr": "10.0.0.2", 00:15:04.222 "trsvcid": "4420" 00:15:04.222 }, 00:15:04.222 "peer_address": { 00:15:04.222 "trtype": "TCP", 00:15:04.222 "adrfam": "IPv4", 00:15:04.222 "traddr": "10.0.0.1", 00:15:04.222 "trsvcid": "40546" 00:15:04.222 }, 00:15:04.222 "auth": { 00:15:04.222 "state": "completed", 00:15:04.222 "digest": "sha256", 00:15:04.222 "dhgroup": "null" 00:15:04.222 } 00:15:04.222 } 00:15:04.222 ]' 00:15:04.222 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.222 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.482 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:04.482 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:04.482 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.482 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.482 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.482 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.740 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:04.740 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.309 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.568 00:15:05.568 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.568 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.568 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.827 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.827 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.827 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.827 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.827 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.827 
11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.827 { 00:15:05.827 "cntlid": 7, 00:15:05.827 "qid": 0, 00:15:05.827 "state": "enabled", 00:15:05.827 "thread": "nvmf_tgt_poll_group_000", 00:15:05.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:05.827 "listen_address": { 00:15:05.827 "trtype": "TCP", 00:15:05.827 "adrfam": "IPv4", 00:15:05.827 "traddr": "10.0.0.2", 00:15:05.827 "trsvcid": "4420" 00:15:05.827 }, 00:15:05.827 "peer_address": { 00:15:05.827 "trtype": "TCP", 00:15:05.827 "adrfam": "IPv4", 00:15:05.827 "traddr": "10.0.0.1", 00:15:05.827 "trsvcid": "40570" 00:15:05.827 }, 00:15:05.827 "auth": { 00:15:05.827 "state": "completed", 00:15:05.827 "digest": "sha256", 00:15:05.827 "dhgroup": "null" 00:15:05.827 } 00:15:05.827 } 00:15:05.827 ]' 00:15:05.827 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.827 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.827 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.086 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:06.086 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.086 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.086 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.086 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.086 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:06.345 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:06.912 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.912 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.913 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.172 00:15:07.172 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.172 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.172 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.431 { 00:15:07.431 "cntlid": 9, 00:15:07.431 "qid": 0, 00:15:07.431 "state": "enabled", 00:15:07.431 "thread": "nvmf_tgt_poll_group_000", 00:15:07.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:07.431 "listen_address": { 00:15:07.431 "trtype": "TCP", 00:15:07.431 "adrfam": "IPv4", 00:15:07.431 "traddr": "10.0.0.2", 00:15:07.431 "trsvcid": "4420" 00:15:07.431 }, 00:15:07.431 "peer_address": { 00:15:07.431 "trtype": "TCP", 00:15:07.431 "adrfam": "IPv4", 00:15:07.431 "traddr": "10.0.0.1", 00:15:07.431 "trsvcid": "35072" 00:15:07.431 
}, 00:15:07.431 "auth": { 00:15:07.431 "state": "completed", 00:15:07.431 "digest": "sha256", 00:15:07.431 "dhgroup": "ffdhe2048" 00:15:07.431 } 00:15:07.431 } 00:15:07.431 ]' 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.431 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.690 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.690 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.690 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.690 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:07.690 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret 
DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:08.257 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.516 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.775 00:15:08.775 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.775 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.775 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.033 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.033 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.033 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.033 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.033 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.033 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.033 { 00:15:09.033 "cntlid": 11, 00:15:09.033 "qid": 0, 00:15:09.033 "state": "enabled", 00:15:09.033 "thread": "nvmf_tgt_poll_group_000", 00:15:09.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:09.033 "listen_address": { 00:15:09.033 "trtype": "TCP", 00:15:09.033 "adrfam": "IPv4", 00:15:09.033 "traddr": "10.0.0.2", 00:15:09.033 "trsvcid": "4420" 00:15:09.033 }, 00:15:09.033 "peer_address": { 00:15:09.033 "trtype": "TCP", 00:15:09.033 "adrfam": "IPv4", 00:15:09.033 "traddr": "10.0.0.1", 00:15:09.033 "trsvcid": "35096" 00:15:09.033 }, 00:15:09.033 "auth": { 00:15:09.033 "state": "completed", 00:15:09.033 "digest": "sha256", 00:15:09.033 "dhgroup": "ffdhe2048" 00:15:09.033 } 00:15:09.033 } 00:15:09.033 ]' 00:15:09.033 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.034 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.034 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.292 11:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.292 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.292 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.292 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.292 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.551 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:09.551 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.119 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:10.378 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.378 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.378 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.378 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.378 00:15:10.637 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.637 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.637 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.637 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.637 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.637 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.637 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.637 11:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.638 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.638 { 00:15:10.638 "cntlid": 13, 00:15:10.638 "qid": 0, 00:15:10.638 "state": "enabled", 00:15:10.638 "thread": "nvmf_tgt_poll_group_000", 00:15:10.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:10.638 "listen_address": { 00:15:10.638 "trtype": "TCP", 00:15:10.638 "adrfam": "IPv4", 00:15:10.638 "traddr": "10.0.0.2", 00:15:10.638 "trsvcid": "4420" 00:15:10.638 }, 00:15:10.638 "peer_address": { 00:15:10.638 "trtype": "TCP", 00:15:10.638 "adrfam": "IPv4", 00:15:10.638 "traddr": "10.0.0.1", 00:15:10.638 "trsvcid": "35132" 00:15:10.638 }, 00:15:10.638 "auth": { 00:15:10.638 "state": "completed", 00:15:10.638 "digest": "sha256", 00:15:10.638 "dhgroup": "ffdhe2048" 00:15:10.638 } 00:15:10.638 } 00:15:10.638 ]' 00:15:10.638 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.897 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.897 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.897 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:10.897 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.897 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.897 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.897 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.155 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:11.155 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:11.724 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.724 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:11.724 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.724 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.724 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.724 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.724 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:11.724 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.983 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.243 00:15:12.243 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.243 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.243 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.502 { 00:15:12.502 "cntlid": 15, 00:15:12.502 "qid": 0, 00:15:12.502 "state": "enabled", 00:15:12.502 "thread": "nvmf_tgt_poll_group_000", 00:15:12.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:12.502 "listen_address": { 00:15:12.502 "trtype": "TCP", 00:15:12.502 "adrfam": "IPv4", 00:15:12.502 "traddr": "10.0.0.2", 00:15:12.502 "trsvcid": "4420" 00:15:12.502 }, 00:15:12.502 "peer_address": { 00:15:12.502 "trtype": "TCP", 00:15:12.502 "adrfam": "IPv4", 00:15:12.502 "traddr": "10.0.0.1", 
00:15:12.502 "trsvcid": "35170" 00:15:12.502 }, 00:15:12.502 "auth": { 00:15:12.502 "state": "completed", 00:15:12.502 "digest": "sha256", 00:15:12.502 "dhgroup": "ffdhe2048" 00:15:12.502 } 00:15:12.502 } 00:15:12.502 ]' 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.502 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.761 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:12.761 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:13.328 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.329 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.329 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.329 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.329 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.329 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.329 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.329 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:13.329 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.588 11:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.588 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.847 00:15:13.847 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.848 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.848 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.848 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.107 { 00:15:14.107 "cntlid": 17, 00:15:14.107 "qid": 0, 00:15:14.107 "state": "enabled", 00:15:14.107 "thread": "nvmf_tgt_poll_group_000", 00:15:14.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:14.107 "listen_address": { 00:15:14.107 "trtype": "TCP", 00:15:14.107 "adrfam": "IPv4", 00:15:14.107 "traddr": "10.0.0.2", 00:15:14.107 "trsvcid": "4420" 00:15:14.107 }, 00:15:14.107 "peer_address": { 00:15:14.107 "trtype": "TCP", 00:15:14.107 "adrfam": "IPv4", 00:15:14.107 "traddr": "10.0.0.1", 00:15:14.107 "trsvcid": "35190" 00:15:14.107 }, 00:15:14.107 "auth": { 00:15:14.107 "state": "completed", 00:15:14.107 "digest": "sha256", 00:15:14.107 "dhgroup": "ffdhe3072" 00:15:14.107 } 00:15:14.107 } 00:15:14.107 ]' 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.107 11:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.107 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.366 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:14.366 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:14.934 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.934 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.934 11:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.934 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.934 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.934 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.934 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:14.934 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.193 11:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.193 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.453 00:15:15.453 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.453 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.453 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.712 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.712 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.712 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.712 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.712 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.712 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.712 { 00:15:15.712 "cntlid": 19, 00:15:15.712 "qid": 0, 00:15:15.712 "state": "enabled", 00:15:15.712 "thread": "nvmf_tgt_poll_group_000", 00:15:15.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:15.712 "listen_address": { 00:15:15.712 "trtype": "TCP", 00:15:15.712 "adrfam": "IPv4", 00:15:15.712 "traddr": "10.0.0.2", 00:15:15.712 "trsvcid": "4420" 00:15:15.712 }, 00:15:15.712 "peer_address": { 00:15:15.712 "trtype": "TCP", 00:15:15.712 "adrfam": "IPv4", 00:15:15.712 "traddr": "10.0.0.1", 00:15:15.712 "trsvcid": "35224" 00:15:15.712 }, 00:15:15.712 "auth": { 00:15:15.712 "state": "completed", 00:15:15.712 "digest": "sha256", 00:15:15.712 "dhgroup": "ffdhe3072" 00:15:15.712 } 00:15:15.712 } 00:15:15.712 ]' 00:15:15.712 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.712 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.712 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.713 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:15.713 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.713 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.713 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.713 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.971 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:15.971 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:16.538 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.538 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.538 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.538 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.538 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.538 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.538 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:16.538 11:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.797 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.056 00:15:17.056 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.056 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.056 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.314 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.314 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.314 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.314 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.314 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.314 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.314 { 00:15:17.314 "cntlid": 21, 00:15:17.314 "qid": 0, 00:15:17.314 "state": "enabled", 00:15:17.314 "thread": "nvmf_tgt_poll_group_000", 00:15:17.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:17.314 "listen_address": { 00:15:17.314 "trtype": "TCP", 00:15:17.314 "adrfam": "IPv4", 00:15:17.314 "traddr": "10.0.0.2", 00:15:17.314 
"trsvcid": "4420" 00:15:17.314 }, 00:15:17.314 "peer_address": { 00:15:17.314 "trtype": "TCP", 00:15:17.314 "adrfam": "IPv4", 00:15:17.314 "traddr": "10.0.0.1", 00:15:17.314 "trsvcid": "33508" 00:15:17.314 }, 00:15:17.314 "auth": { 00:15:17.314 "state": "completed", 00:15:17.314 "digest": "sha256", 00:15:17.314 "dhgroup": "ffdhe3072" 00:15:17.314 } 00:15:17.314 } 00:15:17.314 ]' 00:15:17.315 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.315 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.315 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.315 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.315 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.315 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.315 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.315 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.574 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:17.574 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:18.141 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.141 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:18.141 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.141 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.141 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.141 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.141 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.141 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.400 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.401 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.659 00:15:18.659 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.659 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.659 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.918 { 00:15:18.918 "cntlid": 23, 00:15:18.918 "qid": 0, 00:15:18.918 "state": "enabled", 00:15:18.918 "thread": "nvmf_tgt_poll_group_000", 00:15:18.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:18.918 "listen_address": { 00:15:18.918 "trtype": "TCP", 00:15:18.918 "adrfam": "IPv4", 00:15:18.918 "traddr": "10.0.0.2", 00:15:18.918 "trsvcid": "4420" 00:15:18.918 }, 00:15:18.918 "peer_address": { 00:15:18.918 "trtype": "TCP", 00:15:18.918 "adrfam": "IPv4", 00:15:18.918 "traddr": "10.0.0.1", 00:15:18.918 "trsvcid": "33536" 00:15:18.918 }, 00:15:18.918 "auth": { 00:15:18.918 "state": "completed", 00:15:18.918 "digest": "sha256", 00:15:18.918 "dhgroup": "ffdhe3072" 00:15:18.918 } 00:15:18.918 } 00:15:18.918 ]' 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.918 11:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.918 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.177 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:19.177 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:19.745 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.745 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:19.745 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.745 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:19.745 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.745 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.745 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.745 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:19.745 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.004 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.262 00:15:20.262 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.262 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.262 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.521 11:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.521 { 00:15:20.521 "cntlid": 25, 00:15:20.521 "qid": 0, 00:15:20.521 "state": "enabled", 00:15:20.521 "thread": "nvmf_tgt_poll_group_000", 00:15:20.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:20.521 "listen_address": { 00:15:20.521 "trtype": "TCP", 00:15:20.521 "adrfam": "IPv4", 00:15:20.521 "traddr": "10.0.0.2", 00:15:20.521 "trsvcid": "4420" 00:15:20.521 }, 00:15:20.521 "peer_address": { 00:15:20.521 "trtype": "TCP", 00:15:20.521 "adrfam": "IPv4", 00:15:20.521 "traddr": "10.0.0.1", 00:15:20.521 "trsvcid": "33560" 00:15:20.521 }, 00:15:20.521 "auth": { 00:15:20.521 "state": "completed", 00:15:20.521 "digest": "sha256", 00:15:20.521 "dhgroup": "ffdhe4096" 00:15:20.521 } 00:15:20.521 } 00:15:20.521 ]' 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.521 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.780 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:20.780 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:21.348 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.348 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.348 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.348 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.348 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.348 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.348 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.348 11:09:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.607 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.864 00:15:21.864 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.864 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.864 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.123 { 00:15:22.123 "cntlid": 27, 00:15:22.123 "qid": 0, 00:15:22.123 "state": "enabled", 00:15:22.123 "thread": "nvmf_tgt_poll_group_000", 00:15:22.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:22.123 "listen_address": { 00:15:22.123 "trtype": "TCP", 00:15:22.123 "adrfam": "IPv4", 00:15:22.123 "traddr": "10.0.0.2", 00:15:22.123 
"trsvcid": "4420" 00:15:22.123 }, 00:15:22.123 "peer_address": { 00:15:22.123 "trtype": "TCP", 00:15:22.123 "adrfam": "IPv4", 00:15:22.123 "traddr": "10.0.0.1", 00:15:22.123 "trsvcid": "33584" 00:15:22.123 }, 00:15:22.123 "auth": { 00:15:22.123 "state": "completed", 00:15:22.123 "digest": "sha256", 00:15:22.123 "dhgroup": "ffdhe4096" 00:15:22.123 } 00:15:22.123 } 00:15:22.123 ]' 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.123 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.384 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:22.384 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:22.952 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.952 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:22.952 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.952 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.952 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.952 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.952 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:22.952 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.212 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.471 00:15:23.471 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.471 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:23.471 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.730 { 00:15:23.730 "cntlid": 29, 00:15:23.730 "qid": 0, 00:15:23.730 "state": "enabled", 00:15:23.730 "thread": "nvmf_tgt_poll_group_000", 00:15:23.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:23.730 "listen_address": { 00:15:23.730 "trtype": "TCP", 00:15:23.730 "adrfam": "IPv4", 00:15:23.730 "traddr": "10.0.0.2", 00:15:23.730 "trsvcid": "4420" 00:15:23.730 }, 00:15:23.730 "peer_address": { 00:15:23.730 "trtype": "TCP", 00:15:23.730 "adrfam": "IPv4", 00:15:23.730 "traddr": "10.0.0.1", 00:15:23.730 "trsvcid": "33610" 00:15:23.730 }, 00:15:23.730 "auth": { 00:15:23.730 "state": "completed", 00:15:23.730 "digest": "sha256", 00:15:23.730 "dhgroup": "ffdhe4096" 00:15:23.730 } 00:15:23.730 } 00:15:23.730 ]' 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.730 11:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.730 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.989 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:23.989 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:24.557 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.557 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:24.557 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.557 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.557 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.557 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.557 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:24.557 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:24.815 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.816 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.075 00:15:25.075 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.075 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.075 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.334 { 00:15:25.334 "cntlid": 31, 00:15:25.334 "qid": 0, 00:15:25.334 "state": "enabled", 00:15:25.334 "thread": "nvmf_tgt_poll_group_000", 00:15:25.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:25.334 "listen_address": { 00:15:25.334 "trtype": "TCP", 00:15:25.334 "adrfam": "IPv4", 00:15:25.334 "traddr": "10.0.0.2", 00:15:25.334 "trsvcid": "4420" 00:15:25.334 }, 00:15:25.334 "peer_address": { 00:15:25.334 "trtype": "TCP", 00:15:25.334 "adrfam": "IPv4", 00:15:25.334 "traddr": "10.0.0.1", 00:15:25.334 "trsvcid": "33636" 00:15:25.334 }, 00:15:25.334 "auth": { 00:15:25.334 "state": "completed", 00:15:25.334 "digest": "sha256", 00:15:25.334 "dhgroup": "ffdhe4096" 00:15:25.334 } 00:15:25.334 } 00:15:25.334 ]' 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:25.334 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.592 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.592 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.592 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.592 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:25.592 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:26.158 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.158 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.158 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.159 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.159 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.159 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.159 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.159 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:26.159 11:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.416 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.982 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.982 { 00:15:26.982 "cntlid": 33, 00:15:26.982 "qid": 0, 00:15:26.982 "state": "enabled", 00:15:26.982 "thread": "nvmf_tgt_poll_group_000", 00:15:26.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:26.982 "listen_address": { 00:15:26.982 "trtype": "TCP", 00:15:26.982 "adrfam": "IPv4", 00:15:26.982 "traddr": "10.0.0.2", 00:15:26.982 
"trsvcid": "4420" 00:15:26.982 }, 00:15:26.982 "peer_address": { 00:15:26.982 "trtype": "TCP", 00:15:26.982 "adrfam": "IPv4", 00:15:26.982 "traddr": "10.0.0.1", 00:15:26.982 "trsvcid": "47882" 00:15:26.982 }, 00:15:26.982 "auth": { 00:15:26.982 "state": "completed", 00:15:26.982 "digest": "sha256", 00:15:26.982 "dhgroup": "ffdhe6144" 00:15:26.982 } 00:15:26.982 } 00:15:26.982 ]' 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.982 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.240 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:27.240 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.240 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.240 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.240 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.240 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:27.240 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:27.806 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.065 11:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.065 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.633 00:15:28.633 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.633 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.633 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.633 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.633 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.633 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.633 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.633 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.633 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.633 { 00:15:28.633 "cntlid": 35, 00:15:28.633 "qid": 0, 00:15:28.633 "state": "enabled", 00:15:28.633 "thread": "nvmf_tgt_poll_group_000", 00:15:28.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:28.633 "listen_address": { 00:15:28.633 "trtype": "TCP", 00:15:28.633 "adrfam": "IPv4", 00:15:28.633 "traddr": "10.0.0.2", 00:15:28.633 "trsvcid": "4420" 00:15:28.633 }, 00:15:28.633 "peer_address": { 00:15:28.633 "trtype": "TCP", 00:15:28.633 "adrfam": "IPv4", 00:15:28.633 "traddr": "10.0.0.1", 00:15:28.633 "trsvcid": "47916" 00:15:28.633 }, 00:15:28.633 "auth": { 00:15:28.633 "state": "completed", 00:15:28.633 "digest": "sha256", 00:15:28.633 "dhgroup": "ffdhe6144" 00:15:28.633 } 00:15:28.634 } 00:15:28.634 ]' 00:15:28.634 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.892 11:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.892 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.892 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.892 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.892 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.892 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.892 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.151 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:29.151 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:29.719 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.719 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.719 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.719 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.719 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.719 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.719 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:29.719 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.978 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.237 00:15:30.237 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.237 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.237 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.497 11:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.497 { 00:15:30.497 "cntlid": 37, 00:15:30.497 "qid": 0, 00:15:30.497 "state": "enabled", 00:15:30.497 "thread": "nvmf_tgt_poll_group_000", 00:15:30.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:30.497 "listen_address": { 00:15:30.497 "trtype": "TCP", 00:15:30.497 "adrfam": "IPv4", 00:15:30.497 "traddr": "10.0.0.2", 00:15:30.497 "trsvcid": "4420" 00:15:30.497 }, 00:15:30.497 "peer_address": { 00:15:30.497 "trtype": "TCP", 00:15:30.497 "adrfam": "IPv4", 00:15:30.497 "traddr": "10.0.0.1", 00:15:30.497 "trsvcid": "47940" 00:15:30.497 }, 00:15:30.497 "auth": { 00:15:30.497 "state": "completed", 00:15:30.497 "digest": "sha256", 00:15:30.497 "dhgroup": "ffdhe6144" 00:15:30.497 } 00:15:30.497 } 00:15:30.497 ]' 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.497 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.756 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:30.756 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:31.324 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.324 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:31.324 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.324 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.324 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.324 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.324 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.324 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.582 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.583 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.583 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.852 00:15:31.852 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.852 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.852 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.114 { 00:15:32.114 "cntlid": 39, 00:15:32.114 "qid": 0, 00:15:32.114 "state": "enabled", 00:15:32.114 "thread": "nvmf_tgt_poll_group_000", 00:15:32.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.114 "listen_address": { 00:15:32.114 "trtype": "TCP", 00:15:32.114 "adrfam": 
"IPv4", 00:15:32.114 "traddr": "10.0.0.2", 00:15:32.114 "trsvcid": "4420" 00:15:32.114 }, 00:15:32.114 "peer_address": { 00:15:32.114 "trtype": "TCP", 00:15:32.114 "adrfam": "IPv4", 00:15:32.114 "traddr": "10.0.0.1", 00:15:32.114 "trsvcid": "47980" 00:15:32.114 }, 00:15:32.114 "auth": { 00:15:32.114 "state": "completed", 00:15:32.114 "digest": "sha256", 00:15:32.114 "dhgroup": "ffdhe6144" 00:15:32.114 } 00:15:32.114 } 00:15:32.114 ]' 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.114 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.374 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.374 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.374 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.374 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:32.374 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:32.943 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.943 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.943 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.943 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.943 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.943 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.943 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.943 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:32.943 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.202 
11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.202 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.770 00:15:33.770 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.770 11:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.770 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.029 { 00:15:34.029 "cntlid": 41, 00:15:34.029 "qid": 0, 00:15:34.029 "state": "enabled", 00:15:34.029 "thread": "nvmf_tgt_poll_group_000", 00:15:34.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:34.029 "listen_address": { 00:15:34.029 "trtype": "TCP", 00:15:34.029 "adrfam": "IPv4", 00:15:34.029 "traddr": "10.0.0.2", 00:15:34.029 "trsvcid": "4420" 00:15:34.029 }, 00:15:34.029 "peer_address": { 00:15:34.029 "trtype": "TCP", 00:15:34.029 "adrfam": "IPv4", 00:15:34.029 "traddr": "10.0.0.1", 00:15:34.029 "trsvcid": "47994" 00:15:34.029 }, 00:15:34.029 "auth": { 00:15:34.029 "state": "completed", 00:15:34.029 "digest": "sha256", 00:15:34.029 "dhgroup": "ffdhe8192" 00:15:34.029 } 00:15:34.029 } 00:15:34.029 ]' 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.029 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.288 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:34.288 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:34.859 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.859 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.859 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.859 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.859 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.859 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.859 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:34.859 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.162 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.465 00:15:35.737 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.737 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.737 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.737 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.737 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.737 11:10:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.737 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.737 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.737 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.737 { 00:15:35.737 "cntlid": 43, 00:15:35.737 "qid": 0, 00:15:35.737 "state": "enabled", 00:15:35.737 "thread": "nvmf_tgt_poll_group_000", 00:15:35.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:35.737 "listen_address": { 00:15:35.737 "trtype": "TCP", 00:15:35.737 "adrfam": "IPv4", 00:15:35.737 "traddr": "10.0.0.2", 00:15:35.737 "trsvcid": "4420" 00:15:35.737 }, 00:15:35.737 "peer_address": { 00:15:35.737 "trtype": "TCP", 00:15:35.737 "adrfam": "IPv4", 00:15:35.737 "traddr": "10.0.0.1", 00:15:35.737 "trsvcid": "48026" 00:15:35.737 }, 00:15:35.737 "auth": { 00:15:35.737 "state": "completed", 00:15:35.737 "digest": "sha256", 00:15:35.737 "dhgroup": "ffdhe8192" 00:15:35.737 } 00:15:35.737 } 00:15:35.737 ]' 00:15:35.737 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.737 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.737 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.996 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:35.996 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.996 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.996 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.996 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.254 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:36.254 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.822 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.390 00:15:37.390 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.390 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.390 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.648 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.648 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.648 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.648 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.648 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.648 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.648 { 00:15:37.648 "cntlid": 45, 00:15:37.648 "qid": 0, 00:15:37.648 "state": "enabled", 00:15:37.648 "thread": "nvmf_tgt_poll_group_000", 00:15:37.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:37.648 
"listen_address": { 00:15:37.648 "trtype": "TCP", 00:15:37.648 "adrfam": "IPv4", 00:15:37.648 "traddr": "10.0.0.2", 00:15:37.648 "trsvcid": "4420" 00:15:37.648 }, 00:15:37.648 "peer_address": { 00:15:37.648 "trtype": "TCP", 00:15:37.648 "adrfam": "IPv4", 00:15:37.648 "traddr": "10.0.0.1", 00:15:37.648 "trsvcid": "33898" 00:15:37.648 }, 00:15:37.648 "auth": { 00:15:37.648 "state": "completed", 00:15:37.648 "digest": "sha256", 00:15:37.648 "dhgroup": "ffdhe8192" 00:15:37.648 } 00:15:37.648 } 00:15:37.648 ]' 00:15:37.648 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.648 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.648 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.648 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:37.648 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.648 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.648 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.648 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.907 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:37.907 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:38.474 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.474 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.474 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.474 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.474 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.474 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.474 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.474 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.733 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.734 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.301 00:15:39.301 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.301 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:39.301 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.560 { 00:15:39.560 "cntlid": 47, 00:15:39.560 "qid": 0, 00:15:39.560 "state": "enabled", 00:15:39.560 "thread": "nvmf_tgt_poll_group_000", 00:15:39.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.560 "listen_address": { 00:15:39.560 "trtype": "TCP", 00:15:39.560 "adrfam": "IPv4", 00:15:39.560 "traddr": "10.0.0.2", 00:15:39.560 "trsvcid": "4420" 00:15:39.560 }, 00:15:39.560 "peer_address": { 00:15:39.560 "trtype": "TCP", 00:15:39.560 "adrfam": "IPv4", 00:15:39.560 "traddr": "10.0.0.1", 00:15:39.560 "trsvcid": "33924" 00:15:39.560 }, 00:15:39.560 "auth": { 00:15:39.560 "state": "completed", 00:15:39.560 "digest": "sha256", 00:15:39.560 "dhgroup": "ffdhe8192" 00:15:39.560 } 00:15:39.560 } 00:15:39.560 ]' 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.560 11:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.560 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.819 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:39.819 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:40.386 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.386 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.386 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
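The `jq` checks at `target/auth.sh@75`-`77` above verify three fields of the `nvmf_subsystem_get_qpairs` output: the negotiated digest, the DH group, and the final auth state. A minimal sketch of the same validation in Python, against a trimmed copy of the qpairs JSON from this run (the helper name `check_auth` is illustrative, not part of the test suite):

```python
import json

# Trimmed copy of the nvmf_subsystem_get_qpairs output logged above;
# only the fields the auth.sh checks actually read are kept.
qpairs_json = """
[{"cntlid": 47, "qid": 0, "state": "enabled",
  "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe8192"}}]
"""

def check_auth(qpairs, digest, dhgroup):
    # Mirrors target/auth.sh@75-77: qpair 0 must have negotiated the
    # configured digest and DH group, and authentication must have completed.
    auth = qpairs[0]["auth"]
    return (auth["digest"], auth["dhgroup"], auth["state"]) == \
           (digest, dhgroup, "completed")

qpairs = json.loads(qpairs_json)
assert check_auth(qpairs, "sha256", "ffdhe8192")
```

The shell script performs the same three comparisons with one `jq -r` call per field; collapsing them into a single tuple comparison is just a convenience of the sketch.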
00:15:40.386 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.386 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.386 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:40.386 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.386 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.386 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:40.386 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:40.645 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:40.645 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.645 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.645 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:40.645 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.645 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.645 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.645 
11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.645 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.645 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.645 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.646 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.646 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.904 00:15:40.904 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.904 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.904 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.163 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.164 { 00:15:41.164 "cntlid": 49, 00:15:41.164 "qid": 0, 00:15:41.164 "state": "enabled", 00:15:41.164 "thread": "nvmf_tgt_poll_group_000", 00:15:41.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:41.164 "listen_address": { 00:15:41.164 "trtype": "TCP", 00:15:41.164 "adrfam": "IPv4", 00:15:41.164 "traddr": "10.0.0.2", 00:15:41.164 "trsvcid": "4420" 00:15:41.164 }, 00:15:41.164 "peer_address": { 00:15:41.164 "trtype": "TCP", 00:15:41.164 "adrfam": "IPv4", 00:15:41.164 "traddr": "10.0.0.1", 00:15:41.164 "trsvcid": "33942" 00:15:41.164 }, 00:15:41.164 "auth": { 00:15:41.164 "state": "completed", 00:15:41.164 "digest": "sha384", 00:15:41.164 "dhgroup": "null" 00:15:41.164 } 00:15:41.164 } 00:15:41.164 ]' 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
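The `DHHC-1:<hh>:<base64>:` strings passed to `nvme connect --dhchap-secret` in this log follow the NVMe DH-HMAC-CHAP secret representation: a two-hex-digit hash indicator (`00` for no transformation, `01`/`02`/`03` for SHA-256/384/512) and a base64 blob holding the key material followed by a 4-byte CRC-32 of the key. A round-trip sketch, assuming the little-endian CRC placement used by common implementations such as nvme-cli (the function names are illustrative):

```python
import base64
import struct
import zlib

def format_dhchap_secret(key: bytes, hash_id: int = 0) -> str:
    # Append a CRC-32 of the key material (assumed little-endian here),
    # base64-encode, and wrap in the DHHC-1 framing seen in the log.
    blob = key + struct.pack("<I", zlib.crc32(key))
    return f"DHHC-1:{hash_id:02x}:{base64.b64encode(blob).decode()}:"

def parse_dhchap_secret(secret: str) -> bytes:
    prefix, _hash_id, b64, _ = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DH-HMAC-CHAP secret")
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], struct.unpack("<I", blob[-4:])[0]
    if zlib.crc32(key) != crc:
        raise ValueError("DH-HMAC-CHAP key CRC mismatch")
    return key
```

This explains why the base64 payloads above decode to a readable hex string plus a few trailing non-ASCII bytes: the SPDK test keys are ASCII hex, and the tail is the checksum.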
00:15:41.164 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.422 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:41.423 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:41.989 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.989 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.989 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.989 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.989 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.989 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.989 11:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:41.989 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:42.247 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:42.247 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.247 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.247 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.247 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.247 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.248 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.248 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.248 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.248 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.248 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.248 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.248 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.506 00:15:42.506 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.506 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.506 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.506 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.506 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.506 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.506 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.506 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.506 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.506 { 00:15:42.506 "cntlid": 51, 00:15:42.506 "qid": 0, 00:15:42.506 "state": "enabled", 00:15:42.506 "thread": "nvmf_tgt_poll_group_000", 00:15:42.506 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.506 "listen_address": { 00:15:42.506 "trtype": "TCP", 00:15:42.506 "adrfam": "IPv4", 00:15:42.506 "traddr": "10.0.0.2", 00:15:42.506 "trsvcid": "4420" 00:15:42.506 }, 00:15:42.506 "peer_address": { 00:15:42.506 "trtype": "TCP", 00:15:42.506 "adrfam": "IPv4", 00:15:42.506 "traddr": "10.0.0.1", 00:15:42.506 "trsvcid": "33960" 00:15:42.506 }, 00:15:42.506 "auth": { 00:15:42.506 "state": "completed", 00:15:42.506 "digest": "sha384", 00:15:42.506 "dhgroup": "null" 00:15:42.506 } 00:15:42.506 } 00:15:42.506 ]' 00:15:42.506 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.765 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.765 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.765 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:42.765 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.765 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.765 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.765 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.023 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:43.023 11:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:43.590 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.591 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.591 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.591 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.591 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.591 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.591 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:43.591 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.849 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.107 00:15:44.108 11:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.108 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.108 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.366 { 00:15:44.366 "cntlid": 53, 00:15:44.366 "qid": 0, 00:15:44.366 "state": "enabled", 00:15:44.366 "thread": "nvmf_tgt_poll_group_000", 00:15:44.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.366 "listen_address": { 00:15:44.366 "trtype": "TCP", 00:15:44.366 "adrfam": "IPv4", 00:15:44.366 "traddr": "10.0.0.2", 00:15:44.366 "trsvcid": "4420" 00:15:44.366 }, 00:15:44.366 "peer_address": { 00:15:44.366 "trtype": "TCP", 00:15:44.366 "adrfam": "IPv4", 00:15:44.366 "traddr": "10.0.0.1", 00:15:44.366 "trsvcid": "34002" 00:15:44.366 }, 00:15:44.366 "auth": { 00:15:44.366 "state": "completed", 00:15:44.366 "digest": "sha384", 00:15:44.366 "dhgroup": "null" 00:15:44.366 } 00:15:44.366 } 00:15:44.366 ]' 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.366 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.625 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:44.625 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:45.193 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.193 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.193 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.193 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.193 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.193 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.193 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.193 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.451 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:45.452 
11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.452 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.710 00:15:45.710 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.710 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.710 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.710 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.710 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.710 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.710 11:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.710 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.710 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.710 { 00:15:45.710 "cntlid": 55, 00:15:45.710 "qid": 0, 00:15:45.710 "state": "enabled", 00:15:45.710 "thread": "nvmf_tgt_poll_group_000", 00:15:45.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.710 "listen_address": { 00:15:45.710 "trtype": "TCP", 00:15:45.710 "adrfam": "IPv4", 00:15:45.710 "traddr": "10.0.0.2", 00:15:45.710 "trsvcid": "4420" 00:15:45.710 }, 00:15:45.710 "peer_address": { 00:15:45.710 "trtype": "TCP", 00:15:45.710 "adrfam": "IPv4", 00:15:45.710 "traddr": "10.0.0.1", 00:15:45.710 "trsvcid": "34022" 00:15:45.710 }, 00:15:45.710 "auth": { 00:15:45.710 "state": "completed", 00:15:45.710 "digest": "sha384", 00:15:45.710 "dhgroup": "null" 00:15:45.710 } 00:15:45.710 } 00:15:45.710 ]' 00:15:45.710 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.969 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.969 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.969 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:45.969 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.969 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.969 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.969 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.227 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:46.227 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:46.795 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.795 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.795 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.795 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.795 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.795 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.795 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.795 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:46.795 11:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.054 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.313 00:15:47.313 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.313 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.313 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.313 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.313 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.313 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.313 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.313 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.313 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.313 { 00:15:47.313 "cntlid": 57, 00:15:47.313 "qid": 0, 00:15:47.313 "state": "enabled", 00:15:47.313 "thread": "nvmf_tgt_poll_group_000", 00:15:47.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.313 "listen_address": { 00:15:47.313 "trtype": "TCP", 00:15:47.313 "adrfam": "IPv4", 00:15:47.313 "traddr": "10.0.0.2", 00:15:47.313 
"trsvcid": "4420" 00:15:47.313 }, 00:15:47.313 "peer_address": { 00:15:47.313 "trtype": "TCP", 00:15:47.313 "adrfam": "IPv4", 00:15:47.313 "traddr": "10.0.0.1", 00:15:47.313 "trsvcid": "55566" 00:15:47.313 }, 00:15:47.313 "auth": { 00:15:47.313 "state": "completed", 00:15:47.313 "digest": "sha384", 00:15:47.313 "dhgroup": "ffdhe2048" 00:15:47.313 } 00:15:47.313 } 00:15:47.313 ]' 00:15:47.313 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.573 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.573 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.573 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.573 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.573 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.573 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.573 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.832 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:47.832 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:48.400 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.400 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.400 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.400 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.400 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.400 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.400 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.400 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.658 11:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.658 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.658 00:15:48.917 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.917 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.917 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.917 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.917 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.917 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.917 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.917 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.917 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.917 { 00:15:48.917 "cntlid": 59, 00:15:48.917 "qid": 0, 00:15:48.917 "state": "enabled", 00:15:48.917 "thread": "nvmf_tgt_poll_group_000", 00:15:48.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.917 "listen_address": { 00:15:48.917 "trtype": "TCP", 00:15:48.917 "adrfam": "IPv4", 00:15:48.917 "traddr": "10.0.0.2", 00:15:48.917 "trsvcid": "4420" 00:15:48.917 }, 00:15:48.917 "peer_address": { 00:15:48.917 "trtype": "TCP", 00:15:48.917 "adrfam": "IPv4", 00:15:48.917 "traddr": "10.0.0.1", 00:15:48.917 "trsvcid": "55592" 00:15:48.917 }, 00:15:48.917 "auth": { 00:15:48.917 "state": "completed", 00:15:48.917 "digest": "sha384", 00:15:48.917 "dhgroup": "ffdhe2048" 00:15:48.917 } 00:15:48.917 } 00:15:48.917 ]' 00:15:48.917 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.175 11:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.175 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.175 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.175 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.175 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.175 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.175 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.434 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:49.434 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:50.000 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.000 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.001 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:50.259 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.259 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.259 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.259 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.259 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.259 00:15:50.518 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.518 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.518 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.518 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.518 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.518 11:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.518 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.518 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.518 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.518 { 00:15:50.518 "cntlid": 61, 00:15:50.518 "qid": 0, 00:15:50.518 "state": "enabled", 00:15:50.518 "thread": "nvmf_tgt_poll_group_000", 00:15:50.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.518 "listen_address": { 00:15:50.518 "trtype": "TCP", 00:15:50.518 "adrfam": "IPv4", 00:15:50.518 "traddr": "10.0.0.2", 00:15:50.518 "trsvcid": "4420" 00:15:50.518 }, 00:15:50.518 "peer_address": { 00:15:50.518 "trtype": "TCP", 00:15:50.518 "adrfam": "IPv4", 00:15:50.518 "traddr": "10.0.0.1", 00:15:50.518 "trsvcid": "55614" 00:15:50.518 }, 00:15:50.518 "auth": { 00:15:50.518 "state": "completed", 00:15:50.518 "digest": "sha384", 00:15:50.518 "dhgroup": "ffdhe2048" 00:15:50.518 } 00:15:50.518 } 00:15:50.518 ]' 00:15:50.518 11:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.777 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.777 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.777 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.777 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.777 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.777 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.777 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.036 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:51.036 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:51.603 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.603 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.603 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.603 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.603 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.603 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.603 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:51.603 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:51.603 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:51.603 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.603 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.603 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:51.873 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:51.873 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.874 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:51.874 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.874 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.874 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.874 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:51.874 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.874 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.874 00:15:52.133 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.133 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.133 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.133 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.133 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.133 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.133 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.133 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.133 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.133 { 00:15:52.133 "cntlid": 63, 00:15:52.133 "qid": 0, 00:15:52.133 "state": "enabled", 00:15:52.133 "thread": "nvmf_tgt_poll_group_000", 00:15:52.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:52.133 "listen_address": { 00:15:52.133 "trtype": "TCP", 00:15:52.133 "adrfam": 
"IPv4", 00:15:52.133 "traddr": "10.0.0.2", 00:15:52.133 "trsvcid": "4420" 00:15:52.133 }, 00:15:52.133 "peer_address": { 00:15:52.133 "trtype": "TCP", 00:15:52.133 "adrfam": "IPv4", 00:15:52.133 "traddr": "10.0.0.1", 00:15:52.133 "trsvcid": "55642" 00:15:52.133 }, 00:15:52.133 "auth": { 00:15:52.133 "state": "completed", 00:15:52.133 "digest": "sha384", 00:15:52.133 "dhgroup": "ffdhe2048" 00:15:52.133 } 00:15:52.133 } 00:15:52.133 ]' 00:15:52.133 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.391 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.391 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.391 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.391 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.391 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.391 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.391 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.649 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:52.649 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:53.216 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.216 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.216 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.216 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.216 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.216 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.216 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.216 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.216 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.475 
11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.475 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.475 00:15:53.733 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.733 11:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.733 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.733 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.733 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.733 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.733 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.733 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.733 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.733 { 00:15:53.733 "cntlid": 65, 00:15:53.733 "qid": 0, 00:15:53.733 "state": "enabled", 00:15:53.733 "thread": "nvmf_tgt_poll_group_000", 00:15:53.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.733 "listen_address": { 00:15:53.733 "trtype": "TCP", 00:15:53.733 "adrfam": "IPv4", 00:15:53.733 "traddr": "10.0.0.2", 00:15:53.733 "trsvcid": "4420" 00:15:53.733 }, 00:15:53.733 "peer_address": { 00:15:53.733 "trtype": "TCP", 00:15:53.733 "adrfam": "IPv4", 00:15:53.733 "traddr": "10.0.0.1", 00:15:53.733 "trsvcid": "55658" 00:15:53.733 }, 00:15:53.733 "auth": { 00:15:53.733 "state": "completed", 00:15:53.733 "digest": "sha384", 00:15:53.733 "dhgroup": "ffdhe3072" 00:15:53.733 } 00:15:53.733 } 00:15:53.733 ]' 00:15:53.733 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.992 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:53.992 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.992 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.992 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.992 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.992 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.992 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.251 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:54.251 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:15:54.819 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.819 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.819 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.819 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.819 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.819 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.819 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:54.819 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.078 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:55.078 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.078 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.078 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:55.078 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:55.078 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.078 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:55.078 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.078 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.079 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.079 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.079 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.079 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.339 00:15:55.339 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.339 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.339 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.339 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.339 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.339 11:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.339 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.339 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.339 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.339 { 00:15:55.339 "cntlid": 67, 00:15:55.339 "qid": 0, 00:15:55.339 "state": "enabled", 00:15:55.339 "thread": "nvmf_tgt_poll_group_000", 00:15:55.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.339 "listen_address": { 00:15:55.339 "trtype": "TCP", 00:15:55.339 "adrfam": "IPv4", 00:15:55.339 "traddr": "10.0.0.2", 00:15:55.339 "trsvcid": "4420" 00:15:55.339 }, 00:15:55.339 "peer_address": { 00:15:55.339 "trtype": "TCP", 00:15:55.339 "adrfam": "IPv4", 00:15:55.339 "traddr": "10.0.0.1", 00:15:55.339 "trsvcid": "55672" 00:15:55.339 }, 00:15:55.339 "auth": { 00:15:55.339 "state": "completed", 00:15:55.339 "digest": "sha384", 00:15:55.339 "dhgroup": "ffdhe3072" 00:15:55.339 } 00:15:55.339 } 00:15:55.339 ]' 00:15:55.339 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.598 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.598 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.598 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.598 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.598 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.598 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.598 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.856 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:55.856 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:15:56.423 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.423 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.423 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.423 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.423 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.423 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.423 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:56.423 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.682 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.682 00:15:56.941 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.941 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.941 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.941 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.941 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.941 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.941 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.941 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.941 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.942 { 00:15:56.942 "cntlid": 69, 00:15:56.942 "qid": 0, 00:15:56.942 "state": "enabled", 00:15:56.942 "thread": "nvmf_tgt_poll_group_000", 00:15:56.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:56.942 
"listen_address": { 00:15:56.942 "trtype": "TCP", 00:15:56.942 "adrfam": "IPv4", 00:15:56.942 "traddr": "10.0.0.2", 00:15:56.942 "trsvcid": "4420" 00:15:56.942 }, 00:15:56.942 "peer_address": { 00:15:56.942 "trtype": "TCP", 00:15:56.942 "adrfam": "IPv4", 00:15:56.942 "traddr": "10.0.0.1", 00:15:56.942 "trsvcid": "34128" 00:15:56.942 }, 00:15:56.942 "auth": { 00:15:56.942 "state": "completed", 00:15:56.942 "digest": "sha384", 00:15:56.942 "dhgroup": "ffdhe3072" 00:15:56.942 } 00:15:56.942 } 00:15:56.942 ]' 00:15:56.942 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.200 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.200 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.200 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.200 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.200 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.200 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.200 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.459 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:57.459 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.026 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.285 00:15:58.544 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.544 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:58.544 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.544 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.544 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.544 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.544 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.544 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.544 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.544 { 00:15:58.544 "cntlid": 71, 00:15:58.544 "qid": 0, 00:15:58.544 "state": "enabled", 00:15:58.544 "thread": "nvmf_tgt_poll_group_000", 00:15:58.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.544 "listen_address": { 00:15:58.544 "trtype": "TCP", 00:15:58.544 "adrfam": "IPv4", 00:15:58.544 "traddr": "10.0.0.2", 00:15:58.544 "trsvcid": "4420" 00:15:58.544 }, 00:15:58.544 "peer_address": { 00:15:58.544 "trtype": "TCP", 00:15:58.544 "adrfam": "IPv4", 00:15:58.544 "traddr": "10.0.0.1", 00:15:58.544 "trsvcid": "34156" 00:15:58.544 }, 00:15:58.544 "auth": { 00:15:58.544 "state": "completed", 00:15:58.544 "digest": "sha384", 00:15:58.544 "dhgroup": "ffdhe3072" 00:15:58.544 } 00:15:58.544 } 00:15:58.544 ]' 00:15:58.544 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.544 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.544 11:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.804 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:58.804 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.804 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.804 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.804 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.062 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:59.062 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:15:59.629 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.629 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.629 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:59.629 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.629 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.630 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.630 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.630 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.630 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.630 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.891 00:16:00.152 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.152 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.152 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.153 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.153 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.153 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.153 11:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.153 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.153 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.153 { 00:16:00.153 "cntlid": 73, 00:16:00.153 "qid": 0, 00:16:00.153 "state": "enabled", 00:16:00.153 "thread": "nvmf_tgt_poll_group_000", 00:16:00.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.153 "listen_address": { 00:16:00.153 "trtype": "TCP", 00:16:00.153 "adrfam": "IPv4", 00:16:00.153 "traddr": "10.0.0.2", 00:16:00.153 "trsvcid": "4420" 00:16:00.153 }, 00:16:00.153 "peer_address": { 00:16:00.153 "trtype": "TCP", 00:16:00.153 "adrfam": "IPv4", 00:16:00.153 "traddr": "10.0.0.1", 00:16:00.153 "trsvcid": "34180" 00:16:00.153 }, 00:16:00.153 "auth": { 00:16:00.153 "state": "completed", 00:16:00.153 "digest": "sha384", 00:16:00.153 "dhgroup": "ffdhe4096" 00:16:00.153 } 00:16:00.153 } 00:16:00.153 ]' 00:16:00.153 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.153 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.153 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.411 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.411 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.411 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.411 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.411 11:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.671 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:00.671 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.241 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.500 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.500 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.500 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.500 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.758 00:16:01.758 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.758 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.758 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.758 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.758 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.758 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.758 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.758 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.758 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.758 { 00:16:01.758 "cntlid": 75, 00:16:01.758 "qid": 0, 00:16:01.758 "state": "enabled", 00:16:01.758 "thread": "nvmf_tgt_poll_group_000", 00:16:01.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:01.758 
"listen_address": { 00:16:01.758 "trtype": "TCP", 00:16:01.758 "adrfam": "IPv4", 00:16:01.758 "traddr": "10.0.0.2", 00:16:01.758 "trsvcid": "4420" 00:16:01.758 }, 00:16:01.758 "peer_address": { 00:16:01.758 "trtype": "TCP", 00:16:01.758 "adrfam": "IPv4", 00:16:01.758 "traddr": "10.0.0.1", 00:16:01.758 "trsvcid": "34214" 00:16:01.758 }, 00:16:01.758 "auth": { 00:16:01.758 "state": "completed", 00:16:01.758 "digest": "sha384", 00:16:01.758 "dhgroup": "ffdhe4096" 00:16:01.758 } 00:16:01.758 } 00:16:01.758 ]' 00:16:01.758 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.017 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.017 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.017 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.017 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.017 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.017 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.017 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.276 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:02.276 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.411 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.411 { 00:16:03.411 "cntlid": 77, 00:16:03.411 "qid": 0, 00:16:03.411 "state": "enabled", 00:16:03.411 "thread": "nvmf_tgt_poll_group_000", 00:16:03.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.411 "listen_address": { 00:16:03.411 "trtype": "TCP", 00:16:03.411 "adrfam": "IPv4", 00:16:03.411 "traddr": "10.0.0.2", 00:16:03.411 "trsvcid": "4420" 00:16:03.411 }, 00:16:03.411 "peer_address": { 00:16:03.411 "trtype": "TCP", 00:16:03.411 "adrfam": "IPv4", 00:16:03.411 "traddr": "10.0.0.1", 00:16:03.411 "trsvcid": "34240" 00:16:03.411 }, 00:16:03.411 "auth": { 00:16:03.411 "state": "completed", 00:16:03.411 "digest": "sha384", 00:16:03.411 "dhgroup": "ffdhe4096" 00:16:03.411 } 00:16:03.411 } 00:16:03.411 ]' 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.411 11:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.411 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.670 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.670 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.670 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.670 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.670 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.928 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:03.929 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:04.496 11:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.496 11:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.755 00:16:05.013 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.013 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.013 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.013 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.013 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.013 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.013 11:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.013 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.013 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.013 { 00:16:05.013 "cntlid": 79, 00:16:05.013 "qid": 0, 00:16:05.013 "state": "enabled", 00:16:05.013 "thread": "nvmf_tgt_poll_group_000", 00:16:05.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.013 "listen_address": { 00:16:05.013 "trtype": "TCP", 00:16:05.013 "adrfam": "IPv4", 00:16:05.013 "traddr": "10.0.0.2", 00:16:05.013 "trsvcid": "4420" 00:16:05.013 }, 00:16:05.013 "peer_address": { 00:16:05.013 "trtype": "TCP", 00:16:05.013 "adrfam": "IPv4", 00:16:05.013 "traddr": "10.0.0.1", 00:16:05.013 "trsvcid": "34282" 00:16:05.013 }, 00:16:05.013 "auth": { 00:16:05.013 "state": "completed", 00:16:05.013 "digest": "sha384", 00:16:05.013 "dhgroup": "ffdhe4096" 00:16:05.013 } 00:16:05.013 } 00:16:05.013 ]' 00:16:05.013 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.271 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.271 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.271 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.271 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.271 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.271 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.271 11:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.530 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:05.530 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:06.096 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.096 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.096 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.096 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.096 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.096 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.096 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.096 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:06.096 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.355 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.613 00:16:06.613 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.613 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.613 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.871 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.871 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.871 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.871 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.871 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.871 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.871 { 00:16:06.871 "cntlid": 81, 00:16:06.871 "qid": 0, 00:16:06.871 "state": "enabled", 00:16:06.871 "thread": "nvmf_tgt_poll_group_000", 00:16:06.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:06.871 "listen_address": { 
00:16:06.871 "trtype": "TCP", 00:16:06.871 "adrfam": "IPv4", 00:16:06.872 "traddr": "10.0.0.2", 00:16:06.872 "trsvcid": "4420" 00:16:06.872 }, 00:16:06.872 "peer_address": { 00:16:06.872 "trtype": "TCP", 00:16:06.872 "adrfam": "IPv4", 00:16:06.872 "traddr": "10.0.0.1", 00:16:06.872 "trsvcid": "48072" 00:16:06.872 }, 00:16:06.872 "auth": { 00:16:06.872 "state": "completed", 00:16:06.872 "digest": "sha384", 00:16:06.872 "dhgroup": "ffdhe6144" 00:16:06.872 } 00:16:06.872 } 00:16:06.872 ]' 00:16:06.872 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.872 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.872 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.872 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.872 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.872 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.872 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.872 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.130 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:07.130 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:07.698 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.698 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.698 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.698 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.698 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.698 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.698 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.698 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.957 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.215 00:16:08.215 11:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.215 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.215 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.475 { 00:16:08.475 "cntlid": 83, 00:16:08.475 "qid": 0, 00:16:08.475 "state": "enabled", 00:16:08.475 "thread": "nvmf_tgt_poll_group_000", 00:16:08.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:08.475 "listen_address": { 00:16:08.475 "trtype": "TCP", 00:16:08.475 "adrfam": "IPv4", 00:16:08.475 "traddr": "10.0.0.2", 00:16:08.475 "trsvcid": "4420" 00:16:08.475 }, 00:16:08.475 "peer_address": { 00:16:08.475 "trtype": "TCP", 00:16:08.475 "adrfam": "IPv4", 00:16:08.475 "traddr": "10.0.0.1", 00:16:08.475 "trsvcid": "48110" 00:16:08.475 }, 00:16:08.475 "auth": { 00:16:08.475 "state": "completed", 00:16:08.475 "digest": "sha384", 00:16:08.475 "dhgroup": "ffdhe6144" 00:16:08.475 } 00:16:08.475 } 00:16:08.475 ]' 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.475 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.734 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:08.734 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:09.302 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.302 11:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.302 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.302 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.302 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.302 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.302 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.302 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.560 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.561 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.819 00:16:10.078 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.078 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.078 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.078 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.078 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.078 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.078 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.078 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.078 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.078 { 00:16:10.078 "cntlid": 85, 00:16:10.078 "qid": 0, 00:16:10.078 "state": "enabled", 00:16:10.078 "thread": "nvmf_tgt_poll_group_000", 00:16:10.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.078 "listen_address": { 00:16:10.078 "trtype": "TCP", 00:16:10.078 "adrfam": "IPv4", 00:16:10.078 "traddr": "10.0.0.2", 00:16:10.078 "trsvcid": "4420" 00:16:10.078 }, 00:16:10.078 "peer_address": { 00:16:10.078 "trtype": "TCP", 00:16:10.078 "adrfam": "IPv4", 00:16:10.078 "traddr": "10.0.0.1", 00:16:10.078 "trsvcid": "48136" 00:16:10.078 }, 00:16:10.078 "auth": { 00:16:10.078 "state": "completed", 00:16:10.078 "digest": "sha384", 00:16:10.078 "dhgroup": "ffdhe6144" 00:16:10.078 } 00:16:10.078 } 00:16:10.078 ]' 00:16:10.078 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:10.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.595 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:10.595 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.162 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.421 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:11.421 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:11.421 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.421 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:11.421 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.421 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.421 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.421 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:11.421 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.421 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.679 00:16:11.679 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.679 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.679 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.936 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.936 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.936 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.936 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.936 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.936 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.936 { 00:16:11.936 "cntlid": 87, 00:16:11.936 "qid": 0, 00:16:11.936 "state": "enabled", 00:16:11.936 "thread": "nvmf_tgt_poll_group_000", 00:16:11.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:11.936 "listen_address": { 00:16:11.936 "trtype": 
"TCP", 00:16:11.936 "adrfam": "IPv4", 00:16:11.936 "traddr": "10.0.0.2", 00:16:11.937 "trsvcid": "4420" 00:16:11.937 }, 00:16:11.937 "peer_address": { 00:16:11.937 "trtype": "TCP", 00:16:11.937 "adrfam": "IPv4", 00:16:11.937 "traddr": "10.0.0.1", 00:16:11.937 "trsvcid": "48162" 00:16:11.937 }, 00:16:11.937 "auth": { 00:16:11.937 "state": "completed", 00:16:11.937 "digest": "sha384", 00:16:11.937 "dhgroup": "ffdhe6144" 00:16:11.937 } 00:16:11.937 } 00:16:11.937 ]' 00:16:11.937 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.937 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.937 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.937 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:11.937 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.937 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.937 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.937 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.194 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:12.194 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:12.851 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.851 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.851 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.851 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.851 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.851 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.851 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.851 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:12.851 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.200 11:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.200 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.459 00:16:13.459 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.459 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.459 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.717 { 00:16:13.717 "cntlid": 89, 00:16:13.717 "qid": 0, 00:16:13.717 "state": "enabled", 00:16:13.717 "thread": "nvmf_tgt_poll_group_000", 00:16:13.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.717 "listen_address": { 00:16:13.717 "trtype": "TCP", 00:16:13.717 "adrfam": "IPv4", 00:16:13.717 "traddr": "10.0.0.2", 00:16:13.717 "trsvcid": "4420" 00:16:13.717 }, 00:16:13.717 "peer_address": { 00:16:13.717 "trtype": "TCP", 00:16:13.717 "adrfam": "IPv4", 00:16:13.717 "traddr": "10.0.0.1", 00:16:13.717 "trsvcid": "48180" 00:16:13.717 }, 00:16:13.717 "auth": { 00:16:13.717 "state": "completed", 00:16:13.717 "digest": "sha384", 00:16:13.717 "dhgroup": "ffdhe8192" 00:16:13.717 } 00:16:13.717 } 00:16:13.717 ]' 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.717 11:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.717 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.976 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:13.976 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:14.544 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:14.544 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.544 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.544 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.544 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.544 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.544 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:14.544 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:14.802 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:14.802 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.802 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.802 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.802 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.803 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.803 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.803 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.803 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.803 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.803 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.803 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.803 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.370 00:16:15.370 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.370 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.370 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.370 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.370 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.370 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.370 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.628 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.628 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.628 { 00:16:15.628 "cntlid": 91, 00:16:15.628 "qid": 0, 00:16:15.628 "state": "enabled", 00:16:15.628 "thread": "nvmf_tgt_poll_group_000", 00:16:15.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.628 "listen_address": { 00:16:15.628 "trtype": "TCP", 00:16:15.628 "adrfam": "IPv4", 00:16:15.628 "traddr": "10.0.0.2", 00:16:15.628 "trsvcid": "4420" 00:16:15.628 }, 00:16:15.628 "peer_address": { 00:16:15.628 "trtype": "TCP", 00:16:15.628 "adrfam": "IPv4", 00:16:15.628 "traddr": "10.0.0.1", 00:16:15.628 "trsvcid": "48212" 00:16:15.628 }, 00:16:15.628 "auth": { 00:16:15.628 "state": "completed", 00:16:15.628 "digest": "sha384", 00:16:15.628 "dhgroup": "ffdhe8192" 00:16:15.628 } 00:16:15.628 } 00:16:15.628 ]' 00:16:15.628 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.628 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.628 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.628 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.628 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.628 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:15.629 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.629 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.887 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:15.887 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:16.455 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.455 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.455 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.455 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.455 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.455 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:16.455 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.455 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.714 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.973 00:16:17.232 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.232 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.232 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.232 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.232 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.232 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.232 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.232 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.232 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.232 { 00:16:17.232 "cntlid": 93, 00:16:17.232 "qid": 0, 00:16:17.232 "state": "enabled", 00:16:17.232 "thread": "nvmf_tgt_poll_group_000", 00:16:17.232 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.232 "listen_address": { 00:16:17.232 "trtype": "TCP", 00:16:17.232 "adrfam": "IPv4", 00:16:17.232 "traddr": "10.0.0.2", 00:16:17.232 "trsvcid": "4420" 00:16:17.232 }, 00:16:17.232 "peer_address": { 00:16:17.232 "trtype": "TCP", 00:16:17.232 "adrfam": "IPv4", 00:16:17.232 "traddr": "10.0.0.1", 00:16:17.232 "trsvcid": "34606" 00:16:17.232 }, 00:16:17.232 "auth": { 00:16:17.232 "state": "completed", 00:16:17.232 "digest": "sha384", 00:16:17.232 "dhgroup": "ffdhe8192" 00:16:17.232 } 00:16:17.232 } 00:16:17.232 ]' 00:16:17.232 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.491 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.491 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.491 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.491 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.491 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.491 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.491 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.750 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:17.750 11:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.317 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.884 00:16:18.884 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:18.884 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.884 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.143 { 00:16:19.143 "cntlid": 95, 00:16:19.143 "qid": 0, 00:16:19.143 "state": "enabled", 00:16:19.143 "thread": "nvmf_tgt_poll_group_000", 00:16:19.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.143 "listen_address": { 00:16:19.143 "trtype": "TCP", 00:16:19.143 "adrfam": "IPv4", 00:16:19.143 "traddr": "10.0.0.2", 00:16:19.143 "trsvcid": "4420" 00:16:19.143 }, 00:16:19.143 "peer_address": { 00:16:19.143 "trtype": "TCP", 00:16:19.143 "adrfam": "IPv4", 00:16:19.143 "traddr": "10.0.0.1", 00:16:19.143 "trsvcid": "34634" 00:16:19.143 }, 00:16:19.143 "auth": { 00:16:19.143 "state": "completed", 00:16:19.143 "digest": "sha384", 00:16:19.143 "dhgroup": "ffdhe8192" 00:16:19.143 } 00:16:19.143 } 00:16:19.143 ]' 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.143 11:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.143 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.401 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:19.401 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:19.968 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.968 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.968 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.968 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.968 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.968 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:19.968 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.968 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.968 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:19.968 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.227 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.487 00:16:20.487 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.487 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.487 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.746 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.746 11:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.746 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.746 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.746 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.746 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.746 { 00:16:20.746 "cntlid": 97, 00:16:20.746 "qid": 0, 00:16:20.746 "state": "enabled", 00:16:20.746 "thread": "nvmf_tgt_poll_group_000", 00:16:20.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.746 "listen_address": { 00:16:20.746 "trtype": "TCP", 00:16:20.746 "adrfam": "IPv4", 00:16:20.746 "traddr": "10.0.0.2", 00:16:20.746 "trsvcid": "4420" 00:16:20.746 }, 00:16:20.746 "peer_address": { 00:16:20.746 "trtype": "TCP", 00:16:20.746 "adrfam": "IPv4", 00:16:20.746 "traddr": "10.0.0.1", 00:16:20.746 "trsvcid": "34664" 00:16:20.746 }, 00:16:20.746 "auth": { 00:16:20.746 "state": "completed", 00:16:20.746 "digest": "sha512", 00:16:20.746 "dhgroup": "null" 00:16:20.746 } 00:16:20.746 } 00:16:20.746 ]' 00:16:20.746 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.746 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.746 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.746 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.746 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.005 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.005 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.005 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.005 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:21.005 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:21.572 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.572 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.572 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.572 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.572 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.572 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.572 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.572 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.841 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.098 00:16:22.098 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.098 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.098 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.357 { 00:16:22.357 "cntlid": 99, 
00:16:22.357 "qid": 0, 00:16:22.357 "state": "enabled", 00:16:22.357 "thread": "nvmf_tgt_poll_group_000", 00:16:22.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.357 "listen_address": { 00:16:22.357 "trtype": "TCP", 00:16:22.357 "adrfam": "IPv4", 00:16:22.357 "traddr": "10.0.0.2", 00:16:22.357 "trsvcid": "4420" 00:16:22.357 }, 00:16:22.357 "peer_address": { 00:16:22.357 "trtype": "TCP", 00:16:22.357 "adrfam": "IPv4", 00:16:22.357 "traddr": "10.0.0.1", 00:16:22.357 "trsvcid": "34682" 00:16:22.357 }, 00:16:22.357 "auth": { 00:16:22.357 "state": "completed", 00:16:22.357 "digest": "sha512", 00:16:22.357 "dhgroup": "null" 00:16:22.357 } 00:16:22.357 } 00:16:22.357 ]' 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.357 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.616 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret 
DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:22.616 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:23.183 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.183 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.183 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.183 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.183 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.183 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.183 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.183 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.442 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:23.442 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.443 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.701 00:16:23.701 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.701 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.701 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.961 { 00:16:23.961 "cntlid": 101, 00:16:23.961 "qid": 0, 00:16:23.961 "state": "enabled", 00:16:23.961 "thread": "nvmf_tgt_poll_group_000", 00:16:23.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.961 "listen_address": { 00:16:23.961 "trtype": "TCP", 00:16:23.961 "adrfam": "IPv4", 00:16:23.961 "traddr": "10.0.0.2", 00:16:23.961 "trsvcid": "4420" 00:16:23.961 }, 00:16:23.961 "peer_address": { 00:16:23.961 "trtype": "TCP", 00:16:23.961 "adrfam": "IPv4", 00:16:23.961 "traddr": "10.0.0.1", 00:16:23.961 "trsvcid": "34712" 00:16:23.961 }, 00:16:23.961 "auth": { 00:16:23.961 "state": "completed", 00:16:23.961 "digest": "sha512", 00:16:23.961 "dhgroup": "null" 00:16:23.961 } 00:16:23.961 } 
00:16:23.961 ]' 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.961 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.220 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:24.221 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:24.788 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.788 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.788 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.788 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.788 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.788 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.788 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.788 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:24.788 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.047 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.306 00:16:25.306 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.306 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.306 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.565 { 00:16:25.565 "cntlid": 103, 00:16:25.565 "qid": 0, 00:16:25.565 "state": "enabled", 00:16:25.565 "thread": "nvmf_tgt_poll_group_000", 00:16:25.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.565 "listen_address": { 00:16:25.565 "trtype": "TCP", 00:16:25.565 "adrfam": "IPv4", 00:16:25.565 "traddr": "10.0.0.2", 00:16:25.565 "trsvcid": "4420" 00:16:25.565 }, 00:16:25.565 "peer_address": { 00:16:25.565 "trtype": "TCP", 00:16:25.565 "adrfam": "IPv4", 00:16:25.565 "traddr": "10.0.0.1", 00:16:25.565 "trsvcid": "34746" 00:16:25.565 }, 00:16:25.565 "auth": { 00:16:25.565 "state": "completed", 00:16:25.565 "digest": "sha512", 00:16:25.565 "dhgroup": "null" 00:16:25.565 } 00:16:25.565 } 00:16:25.565 ]' 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.565 11:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.565 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.824 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:25.824 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:26.390 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.390 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.390 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.390 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.390 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.390 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.390 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.390 11:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.390 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.649 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.908 00:16:26.908 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.908 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.908 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.167 { 00:16:27.167 "cntlid": 105, 00:16:27.167 "qid": 0, 00:16:27.167 "state": "enabled", 00:16:27.167 "thread": "nvmf_tgt_poll_group_000", 00:16:27.167 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.167 "listen_address": { 00:16:27.167 "trtype": "TCP", 00:16:27.167 "adrfam": "IPv4", 00:16:27.167 "traddr": "10.0.0.2", 00:16:27.167 "trsvcid": "4420" 00:16:27.167 }, 00:16:27.167 "peer_address": { 00:16:27.167 "trtype": "TCP", 00:16:27.167 "adrfam": "IPv4", 00:16:27.167 "traddr": "10.0.0.1", 00:16:27.167 "trsvcid": "55970" 00:16:27.167 }, 00:16:27.167 "auth": { 00:16:27.167 "state": "completed", 00:16:27.167 "digest": "sha512", 00:16:27.167 "dhgroup": "ffdhe2048" 00:16:27.167 } 00:16:27.167 } 00:16:27.167 ]' 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.426 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret 
DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:27.426 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:27.993 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.993 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.993 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.993 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.993 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.993 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.993 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.993 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:28.252 11:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.252 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.511 00:16:28.511 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.511 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.511 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.769 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.769 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.769 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.769 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.769 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.770 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.770 { 00:16:28.770 "cntlid": 107, 00:16:28.770 "qid": 0, 00:16:28.770 "state": "enabled", 00:16:28.770 "thread": "nvmf_tgt_poll_group_000", 00:16:28.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.770 "listen_address": { 00:16:28.770 "trtype": "TCP", 00:16:28.770 "adrfam": "IPv4", 00:16:28.770 "traddr": "10.0.0.2", 00:16:28.770 "trsvcid": "4420" 00:16:28.770 }, 00:16:28.770 "peer_address": { 00:16:28.770 "trtype": "TCP", 00:16:28.770 "adrfam": "IPv4", 00:16:28.770 "traddr": "10.0.0.1", 00:16:28.770 "trsvcid": "55998" 00:16:28.770 }, 00:16:28.770 "auth": { 00:16:28.770 "state": 
"completed", 00:16:28.770 "digest": "sha512", 00:16:28.770 "dhgroup": "ffdhe2048" 00:16:28.770 } 00:16:28.770 } 00:16:28.770 ]' 00:16:28.770 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.770 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.770 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.770 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.770 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.770 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.770 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.770 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.028 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:29.028 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:29.595 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.595 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.595 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.595 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.595 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.595 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.595 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.595 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.854 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.112 00:16:30.113 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.113 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.113 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.113 
11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.113 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.113 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.113 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.371 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.371 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.371 { 00:16:30.371 "cntlid": 109, 00:16:30.371 "qid": 0, 00:16:30.371 "state": "enabled", 00:16:30.371 "thread": "nvmf_tgt_poll_group_000", 00:16:30.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.371 "listen_address": { 00:16:30.371 "trtype": "TCP", 00:16:30.371 "adrfam": "IPv4", 00:16:30.371 "traddr": "10.0.0.2", 00:16:30.371 "trsvcid": "4420" 00:16:30.371 }, 00:16:30.371 "peer_address": { 00:16:30.371 "trtype": "TCP", 00:16:30.371 "adrfam": "IPv4", 00:16:30.371 "traddr": "10.0.0.1", 00:16:30.371 "trsvcid": "56038" 00:16:30.371 }, 00:16:30.371 "auth": { 00:16:30.371 "state": "completed", 00:16:30.371 "digest": "sha512", 00:16:30.371 "dhgroup": "ffdhe2048" 00:16:30.371 } 00:16:30.371 } 00:16:30.371 ]' 00:16:30.371 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.371 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.371 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.371 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.371 11:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.371 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.371 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.371 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.629 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:30.629 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:31.194 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.194 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.194 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.194 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.194 
11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.194 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.194 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.194 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.453 11:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.453 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.711 00:16:31.711 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.711 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.711 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.711 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.711 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.711 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.711 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.712 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.712 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.712 { 00:16:31.712 "cntlid": 111, 
00:16:31.712 "qid": 0, 00:16:31.712 "state": "enabled", 00:16:31.712 "thread": "nvmf_tgt_poll_group_000", 00:16:31.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.712 "listen_address": { 00:16:31.712 "trtype": "TCP", 00:16:31.712 "adrfam": "IPv4", 00:16:31.712 "traddr": "10.0.0.2", 00:16:31.712 "trsvcid": "4420" 00:16:31.712 }, 00:16:31.712 "peer_address": { 00:16:31.712 "trtype": "TCP", 00:16:31.712 "adrfam": "IPv4", 00:16:31.712 "traddr": "10.0.0.1", 00:16:31.712 "trsvcid": "56060" 00:16:31.712 }, 00:16:31.712 "auth": { 00:16:31.712 "state": "completed", 00:16:31.712 "digest": "sha512", 00:16:31.712 "dhgroup": "ffdhe2048" 00:16:31.712 } 00:16:31.712 } 00:16:31.712 ]' 00:16:31.712 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.970 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.970 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.970 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.970 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.970 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.970 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.970 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.229 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:32.229 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:32.797 11:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.797 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.056 00:16:33.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.315 { 00:16:33.315 "cntlid": 113, 00:16:33.315 "qid": 0, 00:16:33.315 "state": "enabled", 00:16:33.315 "thread": "nvmf_tgt_poll_group_000", 00:16:33.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.315 "listen_address": { 00:16:33.315 "trtype": "TCP", 00:16:33.315 "adrfam": "IPv4", 00:16:33.315 "traddr": "10.0.0.2", 00:16:33.315 "trsvcid": "4420" 00:16:33.315 }, 00:16:33.315 "peer_address": { 00:16:33.315 "trtype": "TCP", 00:16:33.315 "adrfam": "IPv4", 00:16:33.315 "traddr": "10.0.0.1", 00:16:33.315 "trsvcid": "56074" 00:16:33.315 }, 00:16:33.315 "auth": { 00:16:33.315 "state": 
"completed", 00:16:33.315 "digest": "sha512", 00:16:33.315 "dhgroup": "ffdhe3072" 00:16:33.315 } 00:16:33.315 } 00:16:33.315 ]' 00:16:33.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.573 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.573 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.573 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.573 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.573 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.573 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.573 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.832 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:33.832 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret 
DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.400 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.659 00:16:34.659 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.659 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.659 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.918 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.918 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.918 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.918 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.918 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.918 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.918 { 00:16:34.918 "cntlid": 115, 00:16:34.918 "qid": 0, 00:16:34.918 "state": "enabled", 00:16:34.918 "thread": "nvmf_tgt_poll_group_000", 00:16:34.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.918 "listen_address": { 00:16:34.918 "trtype": "TCP", 00:16:34.918 "adrfam": "IPv4", 00:16:34.918 "traddr": "10.0.0.2", 00:16:34.918 "trsvcid": "4420" 00:16:34.918 }, 00:16:34.918 "peer_address": { 00:16:34.918 "trtype": "TCP", 00:16:34.918 "adrfam": "IPv4", 00:16:34.918 "traddr": "10.0.0.1", 00:16:34.918 "trsvcid": "56100" 00:16:34.918 }, 00:16:34.918 "auth": { 00:16:34.918 "state": "completed", 00:16:34.918 "digest": "sha512", 00:16:34.918 "dhgroup": "ffdhe3072" 00:16:34.918 } 00:16:34.918 } 00:16:34.918 ]' 00:16:34.918 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.918 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.918 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.176 11:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.176 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.176 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.176 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.176 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.435 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:35.435 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.004 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.005 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.005 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.005 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:36.005 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.005 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.005 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.005 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.263 00:16:36.264 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.264 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.264 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.522 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.522 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.522 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.522 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.522 11:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.522 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.522 { 00:16:36.522 "cntlid": 117, 00:16:36.522 "qid": 0, 00:16:36.522 "state": "enabled", 00:16:36.522 "thread": "nvmf_tgt_poll_group_000", 00:16:36.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.522 "listen_address": { 00:16:36.522 "trtype": "TCP", 00:16:36.522 "adrfam": "IPv4", 00:16:36.522 "traddr": "10.0.0.2", 00:16:36.522 "trsvcid": "4420" 00:16:36.522 }, 00:16:36.522 "peer_address": { 00:16:36.522 "trtype": "TCP", 00:16:36.522 "adrfam": "IPv4", 00:16:36.522 "traddr": "10.0.0.1", 00:16:36.522 "trsvcid": "56126" 00:16:36.522 }, 00:16:36.522 "auth": { 00:16:36.522 "state": "completed", 00:16:36.522 "digest": "sha512", 00:16:36.522 "dhgroup": "ffdhe3072" 00:16:36.522 } 00:16:36.522 } 00:16:36.522 ]' 00:16:36.522 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.522 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.522 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.781 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.782 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.782 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.782 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.782 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.782 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:36.782 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:37.718 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.718 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.718 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.718 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.718 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.718 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.718 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.718 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.718 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.977 00:16:37.977 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.977 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.977 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.237 { 00:16:38.237 "cntlid": 119, 00:16:38.237 "qid": 0, 00:16:38.237 "state": "enabled", 00:16:38.237 "thread": "nvmf_tgt_poll_group_000", 00:16:38.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.237 "listen_address": { 00:16:38.237 "trtype": "TCP", 00:16:38.237 "adrfam": "IPv4", 00:16:38.237 "traddr": "10.0.0.2", 00:16:38.237 "trsvcid": "4420" 00:16:38.237 }, 00:16:38.237 "peer_address": { 00:16:38.237 "trtype": "TCP", 00:16:38.237 "adrfam": "IPv4", 00:16:38.237 "traddr": "10.0.0.1", 
00:16:38.237 "trsvcid": "45710" 00:16:38.237 }, 00:16:38.237 "auth": { 00:16:38.237 "state": "completed", 00:16:38.237 "digest": "sha512", 00:16:38.237 "dhgroup": "ffdhe3072" 00:16:38.237 } 00:16:38.237 } 00:16:38.237 ]' 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.237 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.496 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:38.496 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:39.063 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.063 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.063 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.063 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.063 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.063 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.063 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.063 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.064 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.322 11:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.322 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.581 00:16:39.581 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.581 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.581 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.840 { 00:16:39.840 "cntlid": 121, 00:16:39.840 "qid": 0, 00:16:39.840 "state": "enabled", 00:16:39.840 "thread": "nvmf_tgt_poll_group_000", 00:16:39.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.840 "listen_address": { 00:16:39.840 "trtype": "TCP", 00:16:39.840 "adrfam": "IPv4", 00:16:39.840 "traddr": "10.0.0.2", 00:16:39.840 "trsvcid": "4420" 00:16:39.840 }, 00:16:39.840 "peer_address": { 00:16:39.840 "trtype": "TCP", 00:16:39.840 "adrfam": "IPv4", 00:16:39.840 "traddr": "10.0.0.1", 00:16:39.840 "trsvcid": "45744" 00:16:39.840 }, 00:16:39.840 "auth": { 00:16:39.840 "state": "completed", 00:16:39.840 "digest": "sha512", 00:16:39.840 "dhgroup": "ffdhe4096" 00:16:39.840 } 00:16:39.840 } 00:16:39.840 ]' 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.840 11:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.840 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.099 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:40.099 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:40.667 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.667 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.667 11:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.667 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.667 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.667 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.667 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:40.667 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.926 11:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.926 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.184 00:16:41.184 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.184 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.184 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.443 { 00:16:41.443 "cntlid": 123, 00:16:41.443 "qid": 0, 00:16:41.443 "state": "enabled", 00:16:41.443 "thread": "nvmf_tgt_poll_group_000", 00:16:41.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.443 "listen_address": { 00:16:41.443 "trtype": "TCP", 00:16:41.443 "adrfam": "IPv4", 00:16:41.443 "traddr": "10.0.0.2", 00:16:41.443 "trsvcid": "4420" 00:16:41.443 }, 00:16:41.443 "peer_address": { 00:16:41.443 "trtype": "TCP", 00:16:41.443 "adrfam": "IPv4", 00:16:41.443 "traddr": "10.0.0.1", 00:16:41.443 "trsvcid": "45776" 00:16:41.443 }, 00:16:41.443 "auth": { 00:16:41.443 "state": "completed", 00:16:41.443 "digest": "sha512", 00:16:41.443 "dhgroup": "ffdhe4096" 00:16:41.443 } 00:16:41.443 } 00:16:41.443 ]' 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.443 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.705 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.705 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.705 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.705 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:41.705 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:42.271 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.271 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.271 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.271 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.271 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.271 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.271 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:42.271 11:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.530 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.789 00:16:42.789 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.789 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.789 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.047 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.047 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.047 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.047 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.047 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.047 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.047 { 00:16:43.047 "cntlid": 125, 00:16:43.047 "qid": 0, 00:16:43.047 "state": "enabled", 00:16:43.047 "thread": "nvmf_tgt_poll_group_000", 00:16:43.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.047 "listen_address": { 00:16:43.047 "trtype": "TCP", 00:16:43.047 "adrfam": "IPv4", 00:16:43.047 "traddr": "10.0.0.2", 00:16:43.047 
"trsvcid": "4420" 00:16:43.047 }, 00:16:43.047 "peer_address": { 00:16:43.047 "trtype": "TCP", 00:16:43.047 "adrfam": "IPv4", 00:16:43.047 "traddr": "10.0.0.1", 00:16:43.047 "trsvcid": "45806" 00:16:43.047 }, 00:16:43.047 "auth": { 00:16:43.047 "state": "completed", 00:16:43.047 "digest": "sha512", 00:16:43.047 "dhgroup": "ffdhe4096" 00:16:43.047 } 00:16:43.047 } 00:16:43.047 ]' 00:16:43.047 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.047 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.047 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.306 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.306 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.306 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.306 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.306 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.306 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:43.306 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:43.872 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.130 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.388 00:16:44.647 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.647 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.647 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.647 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.647 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.647 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.647 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.647 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.647 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.647 { 00:16:44.647 "cntlid": 127, 00:16:44.647 "qid": 0, 00:16:44.647 "state": "enabled", 00:16:44.647 "thread": "nvmf_tgt_poll_group_000", 00:16:44.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:44.647 "listen_address": { 00:16:44.647 "trtype": "TCP", 00:16:44.647 "adrfam": "IPv4", 00:16:44.647 "traddr": "10.0.0.2", 00:16:44.647 "trsvcid": "4420" 00:16:44.647 }, 00:16:44.647 "peer_address": { 00:16:44.647 "trtype": "TCP", 00:16:44.647 "adrfam": "IPv4", 00:16:44.647 "traddr": "10.0.0.1", 00:16:44.647 "trsvcid": "45830" 00:16:44.647 }, 00:16:44.647 "auth": { 00:16:44.647 "state": "completed", 00:16:44.647 "digest": "sha512", 00:16:44.647 "dhgroup": "ffdhe4096" 00:16:44.647 } 00:16:44.647 } 00:16:44.647 ]' 00:16:44.647 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.647 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.647 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.905 11:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.906 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.906 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.906 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.906 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.164 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:45.164 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:45.732 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.732 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.301 00:16:46.301 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.301 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.301 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.301 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.301 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.301 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.301 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.301 11:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.301 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.301 { 00:16:46.301 "cntlid": 129, 00:16:46.301 "qid": 0, 00:16:46.301 "state": "enabled", 00:16:46.301 "thread": "nvmf_tgt_poll_group_000", 00:16:46.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:46.301 "listen_address": { 00:16:46.301 "trtype": "TCP", 00:16:46.301 "adrfam": "IPv4", 00:16:46.301 "traddr": "10.0.0.2", 00:16:46.301 "trsvcid": "4420" 00:16:46.301 }, 00:16:46.301 "peer_address": { 00:16:46.301 "trtype": "TCP", 00:16:46.301 "adrfam": "IPv4", 00:16:46.301 "traddr": "10.0.0.1", 00:16:46.301 "trsvcid": "45866" 00:16:46.301 }, 00:16:46.301 "auth": { 00:16:46.301 "state": "completed", 00:16:46.301 "digest": "sha512", 00:16:46.301 "dhgroup": "ffdhe6144" 00:16:46.301 } 00:16:46.301 } 00:16:46.301 ]' 00:16:46.301 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.561 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.561 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.561 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.561 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.561 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.561 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.561 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.820 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:46.820 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:47.387 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.387 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.387 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.387 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.387 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.387 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.387 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.387 11:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.647 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.907 00:16:47.907 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.907 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.907 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.166 { 00:16:48.166 "cntlid": 131, 00:16:48.166 "qid": 0, 00:16:48.166 "state": "enabled", 00:16:48.166 "thread": "nvmf_tgt_poll_group_000", 00:16:48.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.166 "listen_address": { 00:16:48.166 "trtype": "TCP", 00:16:48.166 "adrfam": "IPv4", 00:16:48.166 "traddr": "10.0.0.2", 00:16:48.166 
"trsvcid": "4420" 00:16:48.166 }, 00:16:48.166 "peer_address": { 00:16:48.166 "trtype": "TCP", 00:16:48.166 "adrfam": "IPv4", 00:16:48.166 "traddr": "10.0.0.1", 00:16:48.166 "trsvcid": "59890" 00:16:48.166 }, 00:16:48.166 "auth": { 00:16:48.166 "state": "completed", 00:16:48.166 "digest": "sha512", 00:16:48.166 "dhgroup": "ffdhe6144" 00:16:48.166 } 00:16:48.166 } 00:16:48.166 ]' 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.166 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.167 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.167 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.426 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:48.426 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:48.994 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.994 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.994 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.994 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.994 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.994 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.994 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:48.994 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.253 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.511 00:16:49.511 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.511 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:49.511 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.769 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.769 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.769 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.769 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.769 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.769 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.769 { 00:16:49.769 "cntlid": 133, 00:16:49.769 "qid": 0, 00:16:49.769 "state": "enabled", 00:16:49.769 "thread": "nvmf_tgt_poll_group_000", 00:16:49.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.769 "listen_address": { 00:16:49.769 "trtype": "TCP", 00:16:49.769 "adrfam": "IPv4", 00:16:49.769 "traddr": "10.0.0.2", 00:16:49.769 "trsvcid": "4420" 00:16:49.770 }, 00:16:49.770 "peer_address": { 00:16:49.770 "trtype": "TCP", 00:16:49.770 "adrfam": "IPv4", 00:16:49.770 "traddr": "10.0.0.1", 00:16:49.770 "trsvcid": "59918" 00:16:49.770 }, 00:16:49.770 "auth": { 00:16:49.770 "state": "completed", 00:16:49.770 "digest": "sha512", 00:16:49.770 "dhgroup": "ffdhe6144" 00:16:49.770 } 00:16:49.770 } 00:16:49.770 ]' 00:16:49.770 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.770 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.770 11:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.770 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.770 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.029 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.029 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.029 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.030 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:50.030 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:50.664 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.664 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.664 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.664 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.664 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.664 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.664 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.664 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.923 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.181 00:16:51.181 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.181 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.181 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.440 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.440 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.440 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.440 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.440 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.440 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.440 { 00:16:51.440 "cntlid": 135, 00:16:51.440 "qid": 0, 00:16:51.440 "state": "enabled", 00:16:51.440 "thread": "nvmf_tgt_poll_group_000", 00:16:51.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.440 "listen_address": { 00:16:51.440 "trtype": "TCP", 00:16:51.440 "adrfam": "IPv4", 00:16:51.440 "traddr": "10.0.0.2", 00:16:51.440 "trsvcid": "4420" 00:16:51.440 }, 00:16:51.440 "peer_address": { 00:16:51.440 "trtype": "TCP", 00:16:51.440 "adrfam": "IPv4", 00:16:51.440 "traddr": "10.0.0.1", 00:16:51.440 "trsvcid": "59944" 00:16:51.440 }, 00:16:51.440 "auth": { 00:16:51.440 "state": "completed", 00:16:51.440 "digest": "sha512", 00:16:51.440 "dhgroup": "ffdhe6144" 00:16:51.440 } 00:16:51.440 } 00:16:51.440 ]' 00:16:51.440 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.440 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.440 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.700 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.700 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.700 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.700 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.700 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.700 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:51.700 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:52.266 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.527 11:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.527 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.528 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.528 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.094 00:16:53.094 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.094 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.094 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.353 { 00:16:53.353 "cntlid": 137, 00:16:53.353 "qid": 0, 00:16:53.353 "state": "enabled", 00:16:53.353 "thread": "nvmf_tgt_poll_group_000", 00:16:53.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.353 "listen_address": { 00:16:53.353 "trtype": "TCP", 00:16:53.353 "adrfam": "IPv4", 00:16:53.353 "traddr": "10.0.0.2", 00:16:53.353 
"trsvcid": "4420" 00:16:53.353 }, 00:16:53.353 "peer_address": { 00:16:53.353 "trtype": "TCP", 00:16:53.353 "adrfam": "IPv4", 00:16:53.353 "traddr": "10.0.0.1", 00:16:53.353 "trsvcid": "59980" 00:16:53.353 }, 00:16:53.353 "auth": { 00:16:53.353 "state": "completed", 00:16:53.353 "digest": "sha512", 00:16:53.353 "dhgroup": "ffdhe8192" 00:16:53.353 } 00:16:53.353 } 00:16:53.353 ]' 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.353 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.612 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.612 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.612 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.612 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:53.612 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:16:54.180 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.181 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.181 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.181 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.181 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.181 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.181 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.181 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.440 11:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.440 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.008 00:16:55.008 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.008 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.008 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.267 { 00:16:55.267 "cntlid": 139, 00:16:55.267 "qid": 0, 00:16:55.267 "state": "enabled", 00:16:55.267 "thread": "nvmf_tgt_poll_group_000", 00:16:55.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.267 "listen_address": { 00:16:55.267 "trtype": "TCP", 00:16:55.267 "adrfam": "IPv4", 00:16:55.267 "traddr": "10.0.0.2", 00:16:55.267 "trsvcid": "4420" 00:16:55.267 }, 00:16:55.267 "peer_address": { 00:16:55.267 "trtype": "TCP", 00:16:55.267 "adrfam": "IPv4", 00:16:55.267 "traddr": "10.0.0.1", 00:16:55.267 "trsvcid": "60006" 00:16:55.267 }, 00:16:55.267 "auth": { 00:16:55.267 "state": "completed", 00:16:55.267 "digest": "sha512", 00:16:55.267 "dhgroup": "ffdhe8192" 00:16:55.267 } 00:16:55.267 } 00:16:55.267 ]' 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.267 11:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.267 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.527 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:55.527 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: --dhchap-ctrl-secret DHHC-1:02:ZDRjZTY5MmQ5MjY2NTY0ZjQ0MjRhNDZjYjE5NzJiZjJhZDcyYTZhYmIxODA1ZmYwqEkFpA==: 00:16:56.095 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.095 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.095 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.095 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.095 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.095 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.095 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.095 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.354 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.922 00:16:56.922 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.922 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.922 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.922 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.922 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.922 11:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.922 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.180 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.181 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.181 { 00:16:57.181 "cntlid": 141, 00:16:57.181 "qid": 0, 00:16:57.181 "state": "enabled", 00:16:57.181 "thread": "nvmf_tgt_poll_group_000", 00:16:57.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.181 "listen_address": { 00:16:57.181 "trtype": "TCP", 00:16:57.181 "adrfam": "IPv4", 00:16:57.181 "traddr": "10.0.0.2", 00:16:57.181 "trsvcid": "4420" 00:16:57.181 }, 00:16:57.181 "peer_address": { 00:16:57.181 "trtype": "TCP", 00:16:57.181 "adrfam": "IPv4", 00:16:57.181 "traddr": "10.0.0.1", 00:16:57.181 "trsvcid": "48086" 00:16:57.181 }, 00:16:57.181 "auth": { 00:16:57.181 "state": "completed", 00:16:57.181 "digest": "sha512", 00:16:57.181 "dhgroup": "ffdhe8192" 00:16:57.181 } 00:16:57.181 } 00:16:57.181 ]' 00:16:57.181 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.181 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.181 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.181 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.181 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.181 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.181 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.181 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.439 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:57.439 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:01:ZWJhMDFkMjViZjEyYTgwMzhkZTM0NWIyYjRlZTk0MGVw0Hvh: 00:16:58.006 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.006 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.006 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.006 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.006 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.006 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.006 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.006 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.265 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.831 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.831 { 00:16:58.831 "cntlid": 143, 00:16:58.831 "qid": 0, 00:16:58.831 "state": "enabled", 00:16:58.831 "thread": "nvmf_tgt_poll_group_000", 00:16:58.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.831 "listen_address": { 00:16:58.831 "trtype": "TCP", 00:16:58.831 "adrfam": 
"IPv4", 00:16:58.831 "traddr": "10.0.0.2", 00:16:58.831 "trsvcid": "4420" 00:16:58.831 }, 00:16:58.831 "peer_address": { 00:16:58.831 "trtype": "TCP", 00:16:58.831 "adrfam": "IPv4", 00:16:58.831 "traddr": "10.0.0.1", 00:16:58.831 "trsvcid": "48114" 00:16:58.831 }, 00:16:58.831 "auth": { 00:16:58.831 "state": "completed", 00:16:58.831 "digest": "sha512", 00:16:58.831 "dhgroup": "ffdhe8192" 00:16:58.831 } 00:16:58.831 } 00:16:58.831 ]' 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.831 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.090 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.090 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.090 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.090 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.090 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.349 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:59.349 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.917 11:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.917 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.485 00:17:00.485 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.485 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.485 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.744 { 00:17:00.744 "cntlid": 145, 00:17:00.744 "qid": 0, 00:17:00.744 "state": "enabled", 00:17:00.744 "thread": "nvmf_tgt_poll_group_000", 00:17:00.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.744 "listen_address": { 00:17:00.744 "trtype": "TCP", 00:17:00.744 "adrfam": "IPv4", 00:17:00.744 "traddr": "10.0.0.2", 00:17:00.744 "trsvcid": "4420" 00:17:00.744 }, 00:17:00.744 "peer_address": { 00:17:00.744 "trtype": "TCP", 00:17:00.744 "adrfam": "IPv4", 00:17:00.744 "traddr": "10.0.0.1", 00:17:00.744 "trsvcid": "48144" 00:17:00.744 }, 00:17:00.744 "auth": { 00:17:00.744 "state": 
"completed", 00:17:00.744 "digest": "sha512", 00:17:00.744 "dhgroup": "ffdhe8192" 00:17:00.744 } 00:17:00.744 } 00:17:00.744 ]' 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.744 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.004 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.004 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.004 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.004 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:17:01.004 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWM4MjMwYzk0ZDVmNDgzOTIzOWM1NzBlYTIxODc3OGExODkyODAyNzhlODYxNDE3N1Zy3A==: --dhchap-ctrl-secret 
DHHC-1:03:NWM0MjVjYjhiOGU5Nzk4MjQ1NjkyY2E4ODc4MzEwMzgzYTc2MGRjZDZiZTk1MTY5ZjBhMjk5MmJmNjRiZDQyYd/wShw=: 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:01.571 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:02.139 request: 00:17:02.139 { 00:17:02.139 "name": "nvme0", 00:17:02.139 "trtype": "tcp", 00:17:02.139 "traddr": "10.0.0.2", 00:17:02.139 "adrfam": "ipv4", 00:17:02.139 "trsvcid": "4420", 00:17:02.139 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.139 "prchk_reftag": false, 00:17:02.139 "prchk_guard": false, 00:17:02.139 "hdgst": false, 00:17:02.139 "ddgst": false, 00:17:02.139 "dhchap_key": "key2", 00:17:02.139 "allow_unrecognized_csi": false, 00:17:02.139 "method": "bdev_nvme_attach_controller", 00:17:02.139 "req_id": 1 00:17:02.139 } 00:17:02.139 Got JSON-RPC error response 00:17:02.139 response: 00:17:02.139 { 00:17:02.139 "code": -5, 00:17:02.139 "message": 
"Input/output error" 00:17:02.139 } 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.139 11:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.139 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.707 request: 00:17:02.707 { 00:17:02.707 "name": "nvme0", 00:17:02.707 "trtype": "tcp", 00:17:02.707 "traddr": "10.0.0.2", 00:17:02.707 "adrfam": "ipv4", 00:17:02.707 "trsvcid": "4420", 00:17:02.707 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.707 "prchk_reftag": false, 00:17:02.707 "prchk_guard": false, 00:17:02.707 "hdgst": 
false, 00:17:02.707 "ddgst": false, 00:17:02.707 "dhchap_key": "key1", 00:17:02.707 "dhchap_ctrlr_key": "ckey2", 00:17:02.707 "allow_unrecognized_csi": false, 00:17:02.707 "method": "bdev_nvme_attach_controller", 00:17:02.707 "req_id": 1 00:17:02.707 } 00:17:02.707 Got JSON-RPC error response 00:17:02.707 response: 00:17:02.707 { 00:17:02.707 "code": -5, 00:17:02.707 "message": "Input/output error" 00:17:02.707 } 00:17:02.707 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.707 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.274 request: 00:17:03.274 { 00:17:03.274 "name": "nvme0", 00:17:03.274 "trtype": 
"tcp", 00:17:03.274 "traddr": "10.0.0.2", 00:17:03.274 "adrfam": "ipv4", 00:17:03.274 "trsvcid": "4420", 00:17:03.274 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.274 "prchk_reftag": false, 00:17:03.274 "prchk_guard": false, 00:17:03.274 "hdgst": false, 00:17:03.274 "ddgst": false, 00:17:03.274 "dhchap_key": "key1", 00:17:03.274 "dhchap_ctrlr_key": "ckey1", 00:17:03.274 "allow_unrecognized_csi": false, 00:17:03.274 "method": "bdev_nvme_attach_controller", 00:17:03.274 "req_id": 1 00:17:03.274 } 00:17:03.274 Got JSON-RPC error response 00:17:03.274 response: 00:17:03.274 { 00:17:03.274 "code": -5, 00:17:03.274 "message": "Input/output error" 00:17:03.274 } 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 4037232 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 4037232 ']' 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4037232 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4037232 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.274 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4037232' 00:17:03.275 killing process with pid 4037232 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4037232 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4037232 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=4058975 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 4058975 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4058975 ']' 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.275 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 4058975 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4058975 ']' 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.534 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.793 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.793 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:03.793 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:03.793 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.793 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.053 null0 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Nif 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Pw7 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pw7 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Xir 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.l8t ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.l8t 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BvB 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.S21 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.S21 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.H31 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.053 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.988 nvme0n1 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.988 { 00:17:04.988 "cntlid": 1, 00:17:04.988 "qid": 0, 00:17:04.988 "state": "enabled", 00:17:04.988 "thread": "nvmf_tgt_poll_group_000", 00:17:04.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.988 "listen_address": { 00:17:04.988 "trtype": "TCP", 00:17:04.988 "adrfam": "IPv4", 00:17:04.988 "traddr": "10.0.0.2", 00:17:04.988 "trsvcid": "4420" 00:17:04.988 }, 00:17:04.988 "peer_address": { 00:17:04.988 "trtype": "TCP", 00:17:04.988 "adrfam": "IPv4", 00:17:04.988 "traddr": 
"10.0.0.1", 00:17:04.988 "trsvcid": "48198" 00:17:04.988 }, 00:17:04.988 "auth": { 00:17:04.988 "state": "completed", 00:17:04.988 "digest": "sha512", 00:17:04.988 "dhgroup": "ffdhe8192" 00:17:04.988 } 00:17:04.988 } 00:17:04.988 ]' 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.988 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.247 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.247 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.247 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.247 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:17:05.247 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:17:05.814 11:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.814 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.814 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.814 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:06.072 11:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.072 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.331 request: 00:17:06.331 { 00:17:06.331 "name": "nvme0", 00:17:06.331 "trtype": "tcp", 00:17:06.331 "traddr": "10.0.0.2", 00:17:06.331 "adrfam": "ipv4", 00:17:06.332 "trsvcid": "4420", 00:17:06.332 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.332 "prchk_reftag": false, 00:17:06.332 "prchk_guard": false, 00:17:06.332 "hdgst": false, 00:17:06.332 "ddgst": false, 00:17:06.332 "dhchap_key": "key3", 00:17:06.332 
"allow_unrecognized_csi": false, 00:17:06.332 "method": "bdev_nvme_attach_controller", 00:17:06.332 "req_id": 1 00:17:06.332 } 00:17:06.332 Got JSON-RPC error response 00:17:06.332 response: 00:17:06.332 { 00:17:06.332 "code": -5, 00:17:06.332 "message": "Input/output error" 00:17:06.332 } 00:17:06.332 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:06.332 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.332 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.332 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.332 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:06.332 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:06.332 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:06.332 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:06.590 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:06.590 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:06.590 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:06.590 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:06.590 11:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.590 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:06.590 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.590 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.590 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.590 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.849 request: 00:17:06.849 { 00:17:06.849 "name": "nvme0", 00:17:06.849 "trtype": "tcp", 00:17:06.849 "traddr": "10.0.0.2", 00:17:06.849 "adrfam": "ipv4", 00:17:06.849 "trsvcid": "4420", 00:17:06.849 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.849 "prchk_reftag": false, 00:17:06.849 "prchk_guard": false, 00:17:06.849 "hdgst": false, 00:17:06.849 "ddgst": false, 00:17:06.849 "dhchap_key": "key3", 00:17:06.849 "allow_unrecognized_csi": false, 00:17:06.849 "method": "bdev_nvme_attach_controller", 00:17:06.849 "req_id": 1 00:17:06.849 } 00:17:06.849 Got JSON-RPC error response 00:17:06.849 response: 00:17:06.849 { 00:17:06.849 "code": -5, 00:17:06.849 "message": "Input/output error" 00:17:06.849 } 00:17:06.849 
11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.849 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:07.108 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.108 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:07.108 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.108 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:07.108 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:07.108 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:07.367 request: 00:17:07.367 { 00:17:07.367 "name": "nvme0", 00:17:07.367 "trtype": "tcp", 00:17:07.367 "traddr": "10.0.0.2", 00:17:07.367 "adrfam": "ipv4", 00:17:07.367 "trsvcid": "4420", 00:17:07.367 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:07.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.367 "prchk_reftag": false, 00:17:07.367 "prchk_guard": false, 00:17:07.367 "hdgst": false, 00:17:07.367 "ddgst": false, 00:17:07.367 "dhchap_key": "key0", 00:17:07.367 "dhchap_ctrlr_key": "key1", 00:17:07.367 "allow_unrecognized_csi": false, 00:17:07.367 "method": "bdev_nvme_attach_controller", 00:17:07.367 "req_id": 1 00:17:07.367 } 00:17:07.367 Got JSON-RPC error response 00:17:07.367 response: 00:17:07.367 { 00:17:07.367 "code": -5, 00:17:07.367 "message": "Input/output error" 00:17:07.367 } 00:17:07.367 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:07.367 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.367 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.367 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.367 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:07.367 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:07.367 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:07.625 nvme0n1 00:17:07.625 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:07.625 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.625 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:07.884 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.884 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.884 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.884 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:07.884 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.884 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:07.884 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.884 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:07.884 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:07.884 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:08.819 nvme0n1 00:17:08.819 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:08.819 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:08.819 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.078 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.078 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:09.078 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.078 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.078 
11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.078 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:09.078 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:09.078 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.078 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.078 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:17:09.078 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: --dhchap-ctrl-secret DHHC-1:03:NWMwOTYyZDYzZDBiNmYyNzRlMGEwNDllZTk2NzI1YmI0Yjk4YWY2MWZiMzE2ZTcyNWVmYWY4MDlkZWNlM2IyMUJs8ko=: 00:17:09.645 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:09.645 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:09.645 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:09.645 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:09.645 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:09.645 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:09.645 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:09.645 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.645 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.904 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:09.904 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.904 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:09.904 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.904 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.904 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.904 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.904 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:09.904 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:09.904 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:10.470 request: 00:17:10.470 { 00:17:10.470 "name": "nvme0", 00:17:10.470 "trtype": "tcp", 00:17:10.470 "traddr": "10.0.0.2", 00:17:10.470 "adrfam": "ipv4", 00:17:10.470 "trsvcid": "4420", 00:17:10.470 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.470 "prchk_reftag": false, 00:17:10.470 "prchk_guard": false, 00:17:10.470 "hdgst": false, 00:17:10.470 "ddgst": false, 00:17:10.470 "dhchap_key": "key1", 00:17:10.470 "allow_unrecognized_csi": false, 00:17:10.470 "method": "bdev_nvme_attach_controller", 00:17:10.470 "req_id": 1 00:17:10.470 } 00:17:10.470 Got JSON-RPC error response 00:17:10.470 response: 00:17:10.470 { 00:17:10.470 "code": -5, 00:17:10.470 "message": "Input/output error" 00:17:10.470 } 00:17:10.470 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:10.470 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.470 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.470 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.470 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:10.470 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:10.470 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:11.038 nvme0n1 00:17:11.297 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:11.297 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:11.297 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.297 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.297 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.297 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.556 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.556 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.556 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.556 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.556 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:11.556 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:11.556 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:11.817 nvme0n1 00:17:11.817 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:11.817 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:11.817 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.075 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.075 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.075 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: '' 2s 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: ]] 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZmYwYzFiZTJiYTJkNDM1ZTY0OWZmN2I3ZTk1ZWYzMDP/AKiL: 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:12.334 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:14.238 
11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: 2s 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:14.238 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: ]] 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2Y1NThjMjU5OTMyMjkzNmE2ZjBkNThmZDI0N2YxOGNmOWI0OWUwMTQ5YjViMDJik9KxkQ==: 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:14.238 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:16.769 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:17.027 nvme0n1 00:17:17.027 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.027 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.027 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.027 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.027 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.027 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.592 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:17.592 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.592 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:17.850 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.850 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.850 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.850 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.850 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.850 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:17.850 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:18.108 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:18.108 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:18.108 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.368 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.627 request: 00:17:18.627 { 00:17:18.627 "name": "nvme0", 00:17:18.627 "dhchap_key": "key1", 00:17:18.627 "dhchap_ctrlr_key": "key3", 00:17:18.627 "method": "bdev_nvme_set_keys", 00:17:18.627 "req_id": 1 00:17:18.627 } 00:17:18.627 Got JSON-RPC error response 00:17:18.627 response: 00:17:18.627 { 00:17:18.627 "code": -13, 00:17:18.627 "message": "Permission denied" 00:17:18.627 } 00:17:18.627 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:18.627 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:18.627 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:18.627 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:18.627 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:18.627 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:18.627 11:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.886 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:18.886 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:19.821 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:19.821 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:19.821 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.081 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:20.081 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:20.081 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.081 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.081 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.081 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:20.081 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:20.081 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:21.019 nvme0n1 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.019 11:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:21.019 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:21.278 request: 00:17:21.278 { 00:17:21.278 "name": "nvme0", 00:17:21.278 "dhchap_key": "key2", 00:17:21.278 "dhchap_ctrlr_key": "key0", 00:17:21.278 "method": "bdev_nvme_set_keys", 00:17:21.278 "req_id": 1 00:17:21.278 } 00:17:21.278 Got JSON-RPC error response 00:17:21.278 response: 00:17:21.278 { 00:17:21.278 "code": -13, 00:17:21.278 "message": "Permission denied" 00:17:21.278 } 00:17:21.278 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:21.278 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.278 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.278 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.278 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:21.278 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:21.278 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.536 11:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:21.537 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:22.473 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:22.473 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:22.473 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4037281 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 4037281 ']' 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4037281 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4037281 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:22.732 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 4037281' 00:17:22.732 killing process with pid 4037281 00:17:22.733 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4037281 00:17:22.733 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4037281 00:17:22.993 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:22.993 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.993 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:22.993 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.993 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:22.993 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.993 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.252 rmmod nvme_tcp 00:17:23.252 rmmod nvme_fabrics 00:17:23.252 rmmod nvme_keyring 00:17:23.252 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 4058975 ']' 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 4058975 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 4058975 ']' 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4058975 
00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4058975 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4058975' 00:17:23.253 killing process with pid 4058975 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4058975 00:17:23.253 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4058975 00:17:23.511 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.511 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.511 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.511 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:23.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:23.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.512 11:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:23.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.419 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:25.419 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Nif /tmp/spdk.key-sha256.Xir /tmp/spdk.key-sha384.BvB /tmp/spdk.key-sha512.H31 /tmp/spdk.key-sha512.Pw7 /tmp/spdk.key-sha384.l8t /tmp/spdk.key-sha256.S21 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:25.419 00:17:25.419 real 2m33.787s 00:17:25.419 user 5m54.841s 00:17:25.419 sys 0m24.197s 00:17:25.419 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.419 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.419 ************************************ 00:17:25.419 END TEST nvmf_auth_target 00:17:25.419 ************************************ 00:17:25.419 11:11:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:25.419 11:11:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:25.419 11:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:25.419 11:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:17:25.419 11:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:25.419 ************************************ 00:17:25.419 START TEST nvmf_bdevio_no_huge 00:17:25.419 ************************************ 00:17:25.419 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:25.680 * Looking for test storage... 00:17:25.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.680 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:25.680 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:25.680 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.680 --rc genhtml_branch_coverage=1 00:17:25.680 --rc genhtml_function_coverage=1 00:17:25.680 --rc genhtml_legend=1 00:17:25.680 --rc geninfo_all_blocks=1 00:17:25.680 --rc geninfo_unexecuted_blocks=1 00:17:25.680 00:17:25.680 ' 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.680 --rc genhtml_branch_coverage=1 00:17:25.680 --rc genhtml_function_coverage=1 00:17:25.680 --rc genhtml_legend=1 00:17:25.680 --rc geninfo_all_blocks=1 00:17:25.680 --rc geninfo_unexecuted_blocks=1 00:17:25.680 00:17:25.680 ' 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.680 --rc genhtml_branch_coverage=1 00:17:25.680 --rc genhtml_function_coverage=1 00:17:25.680 --rc genhtml_legend=1 00:17:25.680 --rc geninfo_all_blocks=1 00:17:25.680 --rc geninfo_unexecuted_blocks=1 00:17:25.680 00:17:25.680 ' 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.680 --rc genhtml_branch_coverage=1 
00:17:25.680 --rc genhtml_function_coverage=1 00:17:25.680 --rc genhtml_legend=1 00:17:25.680 --rc geninfo_all_blocks=1 00:17:25.680 --rc geninfo_unexecuted_blocks=1 00:17:25.680 00:17:25.680 ' 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.680 11:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.680 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:25.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:25.681 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:32.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:32.254 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:32.254 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:32.255 Found net devices under 0000:86:00.0: cvl_0_0 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.255 
11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:32.255 Found net devices under 0000:86:00.1: cvl_0_1 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.255 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:32.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:17:32.255 00:17:32.255 --- 10.0.0.2 ping statistics --- 00:17:32.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.255 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:17:32.255 00:17:32.255 --- 10.0.0.1 ping statistics --- 00:17:32.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.255 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
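The `nvmf_tcp_init` steps traced above split the two physical ports across a network namespace: the target-side interface is moved into `cvl_0_0_ns_spdk` and addressed as 10.0.0.2, while the initiator interface stays in the root namespace as 10.0.0.1, so target and initiator traffic cross the real link. A hedged sketch of that sequence as one function (the name `setup_target_netns` is ours; it needs root and real NICs, so it is only defined here, not run):

```shell
# Condensed form of the ip(8) sequence at nvmf/common.sh@265-@284.
# Root-only and hardware-dependent, therefore defined but not invoked.
setup_target_netns() {
    local ns=$1 target_if=$2 initiator_if=$3
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"          # move port into namespace
    ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side, root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
}
```

The trace then opens TCP port 4420 on the initiator interface via iptables and verifies reachability in both directions with `ping -c 1` before declaring the topology usable.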
-m 0x78 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=4065867 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 4065867 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 4065867 ']' 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.255 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.255 [2024-11-20 11:11:59.117623] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:17:32.255 [2024-11-20 11:11:59.117677] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:32.255 [2024-11-20 11:11:59.209846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.255 [2024-11-20 11:11:59.257311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.255 [2024-11-20 11:11:59.257346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.255 [2024-11-20 11:11:59.257353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.255 [2024-11-20 11:11:59.257361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.255 [2024-11-20 11:11:59.257366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
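`waitforlisten` above blocks until the freshly started `nvmf_tgt` exposes its RPC socket at `/var/tmp/spdk.sock`. A simplified stand-in for that polling loop (the name `wait_for_socket` and the retry/sleep values are ours; the real helper in autotest_common.sh also re-checks that the pid is still alive):

```shell
# Poll for a UNIX-domain socket to appear, giving up after max_retries.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.01
    done
    return 1                          # socket never showed up
}
```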
00:17:32.255 [2024-11-20 11:11:59.258619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:32.255 [2024-11-20 11:11:59.258638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:32.255 [2024-11-20 11:11:59.258751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.255 [2024-11-20 11:11:59.258751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:32.515 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.515 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:32.515 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:32.515 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:32.515 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.515 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.515 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:32.515 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.515 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.515 [2024-11-20 11:12:00.004733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.773 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.773 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:32.773 11:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.773 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.774 Malloc0 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.774 [2024-11-20 11:12:00.049060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.774 11:12:00 
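The `rpc_cmd` calls traced above (bdevio.sh@18-@22) build the target in four steps: create the TCP transport, back it with a 64 MiB malloc bdev, create a subsystem, attach the namespace, and add a listener on 10.0.0.2:4420. The same sequence sketched as explicit `rpc.py` invocations (the function name and the `scripts/rpc.py` path are illustrative; this needs a running `nvmf_tgt` on the default RPC socket, so it is only defined here):

```shell
# The four rpc_cmd steps from bdevio.sh, expressed as direct rpc.py calls.
# Requires a live nvmf_tgt on /var/tmp/spdk.sock; defined, not invoked.
configure_bdevio_target() {
    local rpc=scripts/rpc.py   # path inside an SPDK checkout (assumption)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
}
```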
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:32.774 { 00:17:32.774 "params": { 00:17:32.774 "name": "Nvme$subsystem", 00:17:32.774 "trtype": "$TEST_TRANSPORT", 00:17:32.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:32.774 "adrfam": "ipv4", 00:17:32.774 "trsvcid": "$NVMF_PORT", 00:17:32.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:32.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:32.774 "hdgst": ${hdgst:-false}, 00:17:32.774 "ddgst": ${ddgst:-false} 00:17:32.774 }, 00:17:32.774 "method": "bdev_nvme_attach_controller" 00:17:32.774 } 00:17:32.774 EOF 00:17:32.774 )") 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:32.774 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:32.774 "params": { 00:17:32.774 "name": "Nvme1", 00:17:32.774 "trtype": "tcp", 00:17:32.774 "traddr": "10.0.0.2", 00:17:32.774 "adrfam": "ipv4", 00:17:32.774 "trsvcid": "4420", 00:17:32.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.774 "hdgst": false, 00:17:32.774 "ddgst": false 00:17:32.774 }, 00:17:32.774 "method": "bdev_nvme_attach_controller" 00:17:32.774 }' 00:17:32.774 [2024-11-20 11:12:00.100131] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:17:32.774 [2024-11-20 11:12:00.100184] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4066133 ] 00:17:32.774 [2024-11-20 11:12:00.181675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:32.774 [2024-11-20 11:12:00.230651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.774 [2024-11-20 11:12:00.230759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.774 [2024-11-20 11:12:00.230759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.032 I/O targets: 00:17:33.032 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:33.032 00:17:33.032 00:17:33.032 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.032 http://cunit.sourceforge.net/ 00:17:33.032 00:17:33.032 00:17:33.032 Suite: bdevio tests on: Nvme1n1 00:17:33.032 Test: blockdev write read block ...passed 00:17:33.032 Test: blockdev write zeroes read block ...passed 00:17:33.032 Test: blockdev write zeroes read no split ...passed 00:17:33.290 Test: blockdev write zeroes 
read split ...passed 00:17:33.290 Test: blockdev write zeroes read split partial ...passed 00:17:33.290 Test: blockdev reset ...[2024-11-20 11:12:00.562653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:33.290 [2024-11-20 11:12:00.562721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1970920 (9): Bad file descriptor 00:17:33.290 [2024-11-20 11:12:00.576907] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:33.290 passed 00:17:33.290 Test: blockdev write read 8 blocks ...passed 00:17:33.290 Test: blockdev write read size > 128k ...passed 00:17:33.290 Test: blockdev write read invalid size ...passed 00:17:33.290 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:33.290 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:33.290 Test: blockdev write read max offset ...passed 00:17:33.290 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:33.290 Test: blockdev writev readv 8 blocks ...passed 00:17:33.290 Test: blockdev writev readv 30 x 1block ...passed 00:17:33.548 Test: blockdev writev readv block ...passed 00:17:33.548 Test: blockdev writev readv size > 128k ...passed 00:17:33.548 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:33.548 Test: blockdev comparev and writev ...[2024-11-20 11:12:00.787789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.548 [2024-11-20 11:12:00.787818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.548 [2024-11-20 11:12:00.787832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.548 [2024-11-20 
11:12:00.787840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:33.548 [2024-11-20 11:12:00.788081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.548 [2024-11-20 11:12:00.788092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:33.548 [2024-11-20 11:12:00.788104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.548 [2024-11-20 11:12:00.788111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:33.548 [2024-11-20 11:12:00.788347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.548 [2024-11-20 11:12:00.788356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:33.548 [2024-11-20 11:12:00.788368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.548 [2024-11-20 11:12:00.788374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:33.548 [2024-11-20 11:12:00.788622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.548 [2024-11-20 11:12:00.788632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:33.548 [2024-11-20 11:12:00.788643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.548 [2024-11-20 11:12:00.788650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:33.548 passed 00:17:33.548 Test: blockdev nvme passthru rw ...passed 00:17:33.548 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:12:00.871325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.548 [2024-11-20 11:12:00.871339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:33.548 [2024-11-20 11:12:00.871442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.548 [2024-11-20 11:12:00.871452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:33.548 [2024-11-20 11:12:00.871556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.548 [2024-11-20 11:12:00.871566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:33.548 [2024-11-20 11:12:00.871668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.548 [2024-11-20 11:12:00.871678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:33.548 passed 00:17:33.548 Test: blockdev nvme admin passthru ...passed 00:17:33.548 Test: blockdev copy ...passed 00:17:33.548 00:17:33.548 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.548 suites 1 1 n/a 0 0 00:17:33.548 tests 23 23 23 0 0 00:17:33.548 asserts 152 152 152 0 n/a 00:17:33.548 00:17:33.548 Elapsed time = 1.063 seconds 
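The bdevio run above received its controller config through `--json /dev/fd/62`, generated by `gen_nvmf_target_json`: a heredoc template is expanded once per subsystem id, collected into an array, and comma-joined. A trimmed re-creation of that templating step (the helper name `gen_config` is ours, and the `hdgst`/`ddgst` defaults and `jq` pretty-printing pass are dropped for brevity):

```shell
# Reduced version of gen_nvmf_target_json (nvmf/common.sh@560-@586):
# one attach-controller stanza per subsystem id, joined with commas.
gen_config() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,          # array join character for ${config[*]}
    printf '%s\n' "${config[*]}"
}
```

With no arguments the `${@:-1}` default produces the single `Nvme1`/`cnode1` stanza seen in the trace.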
00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.806 rmmod nvme_tcp 00:17:33.806 rmmod nvme_fabrics 00:17:33.806 rmmod nvme_keyring 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 4065867 ']' 00:17:33.806 11:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 4065867 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 4065867 ']' 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 4065867 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.806 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4065867 00:17:34.064 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:34.064 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:34.064 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4065867' 00:17:34.064 killing process with pid 4065867 00:17:34.064 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 4065867 00:17:34.064 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 4065867 00:17:34.322 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:34.322 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:34.322 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:34.322 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:34.322 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:34.322 11:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:34.322 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:34.322 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.322 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:34.322 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.322 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.323 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:36.361 00:17:36.361 real 0m10.794s 00:17:36.361 user 0m12.915s 00:17:36.361 sys 0m5.465s 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.361 ************************************ 00:17:36.361 END TEST nvmf_bdevio_no_huge 00:17:36.361 ************************************ 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.361 
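The `iptr` cleanup traced above undoes the tagged firewall rule without tracking rule numbers: every rule added by the harness carries an `SPDK_NVMF` comment, so teardown simply round-trips the ruleset through a filter. Sketched as a function (root-only, therefore defined but not run; the name `iptr_cleanup` is ours):

```shell
# nvmf/common.sh@791: drop every rule tagged with the SPDK_NVMF comment
# by saving the ruleset, filtering it, and restoring the remainder.
iptr_cleanup() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
```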
************************************ 00:17:36.361 START TEST nvmf_tls 00:17:36.361 ************************************ 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:36.361 * Looking for test storage... 00:17:36.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:36.361 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:36.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.621 --rc genhtml_branch_coverage=1 00:17:36.621 --rc genhtml_function_coverage=1 00:17:36.621 --rc genhtml_legend=1 00:17:36.621 --rc geninfo_all_blocks=1 00:17:36.621 --rc geninfo_unexecuted_blocks=1 00:17:36.621 00:17:36.621 ' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:36.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.621 --rc genhtml_branch_coverage=1 00:17:36.621 --rc genhtml_function_coverage=1 00:17:36.621 --rc genhtml_legend=1 00:17:36.621 --rc geninfo_all_blocks=1 00:17:36.621 --rc geninfo_unexecuted_blocks=1 00:17:36.621 00:17:36.621 ' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:36.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.621 --rc genhtml_branch_coverage=1 00:17:36.621 --rc genhtml_function_coverage=1 00:17:36.621 --rc genhtml_legend=1 00:17:36.621 --rc geninfo_all_blocks=1 00:17:36.621 --rc geninfo_unexecuted_blocks=1 00:17:36.621 00:17:36.621 ' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:36.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.621 --rc genhtml_branch_coverage=1 00:17:36.621 --rc genhtml_function_coverage=1 00:17:36.621 --rc genhtml_legend=1 00:17:36.621 --rc geninfo_all_blocks=1 00:17:36.621 --rc geninfo_unexecuted_blocks=1 00:17:36.621 00:17:36.621 ' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.621 
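The lcov probe above calls `lt 1.15 2`, which `scripts/common.sh` answers by splitting each version string on `.`, `-`, and `:` and comparing the fields numerically, padding the shorter version with zeros. A condensed re-creation of that comparator (the name `version_lt` is ours; the original also handles the greater-than and equality operators through `cmp_versions`):

```shell
# Condensed form of cmp_versions (scripts/common.sh@333-@368), less-than only.
version_lt() {
    local IFS=.-:                 # field separators used by the original
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local f1=${ver1[v]:-0} f2=${ver2[v]:-0}   # pad missing fields with 0
        (( f1 < f2 )) && return 0
        (( f1 > f2 )) && return 1
    done
    return 1                      # equal versions are not less-than
}
```

So `version_lt 1.15 2` succeeds, which is why the trace takes the `--rc lcov_branch_coverage=1` fallback path for the older lcov.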
11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.621 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.622 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.622 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.622 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.622 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.622 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.622 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:36.622 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:36.622 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:36.622 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.189 11:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:43.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:43.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.189 11:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:43.189 Found net devices under 0000:86:00.0: cvl_0_0 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:43.189 Found net devices under 0000:86:00.1: cvl_0_1 00:17:43.189 11:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:43.189 
11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:43.189 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:43.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:17:43.189 00:17:43.190 --- 10.0.0.2 ping statistics --- 00:17:43.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.190 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:17:43.190 00:17:43.190 --- 10.0.0.1 ping statistics --- 00:17:43.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.190 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4070304 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4070304 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # 
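Aside: the `nvmf_tcp_init` trace above isolates the target NIC (cvl_0_0) in a network namespace, leaves the initiator NIC (cvl_0_1) in the root namespace, assigns 10.0.0.2/10.0.0.1, and opens TCP port 4420, then verifies reachability with ping in both directions. A condensed sketch of that command sequence, with interface names and addresses taken from the log (the helper below is illustrative, not SPDK code):

```python
# Namespace, interface, and address names below are taken from the log above.
NS = "cvl_0_0_ns_spdk"
TARGET_IF, INITIATOR_IF = "cvl_0_0", "cvl_0_1"
TARGET_IP, INITIATOR_IP, PORT = "10.0.0.2", "10.0.0.1", 4420

def setup_commands() -> list[str]:
    """Return, in order, the shell commands that reproduce the topology the
    trace sets up: target port moved into a netns, initiator port on the
    host side, and the NVMe/TCP port opened toward the initiator NIC."""
    return [
        f"ip netns add {NS}",
        f"ip link set {TARGET_IF} netns {NS}",                  # target NIC lives in the ns
        f"ip addr add {INITIATOR_IP}/24 dev {INITIATOR_IF}",    # host side is the initiator
        f"ip netns exec {NS} ip addr add {TARGET_IP}/24 dev {TARGET_IF}",
        f"ip link set {INITIATOR_IF} up",
        f"ip netns exec {NS} ip link set {TARGET_IF} up",
        f"ip netns exec {NS} ip link set lo up",
        f"iptables -I INPUT 1 -i {INITIATOR_IF} -p tcp --dport {PORT} -j ACCEPT",
    ]
```

Running these requires root; the log's `ipts` wrapper additionally tags the iptables rule with an `SPDK_NVMF` comment so it can be cleaned up later.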
'[' -z 4070304 ']' 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.190 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.190 [2024-11-20 11:12:09.933205] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:17:43.190 [2024-11-20 11:12:09.933249] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.190 [2024-11-20 11:12:10.012944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.190 [2024-11-20 11:12:10.060833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.190 [2024-11-20 11:12:10.060871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:43.190 [2024-11-20 11:12:10.060879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.190 [2024-11-20 11:12:10.060886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.190 [2024-11-20 11:12:10.060891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.190 [2024-11-20 11:12:10.061453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:43.190 true 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:43.190 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:43.190 
11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:43.449 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.449 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:43.449 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:43.449 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:43.449 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:43.707 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.707 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:43.965 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:43.965 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:43.965 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.965 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:44.223 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:44.223 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:44.223 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:44.223 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.223 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:44.482 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:44.482 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:44.482 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:44.740 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.741 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:44.999 11:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.yeHP4hnoeA 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.bkVpCE1lT2 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.yeHP4hnoeA 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.bkVpCE1lT2 00:17:44.999 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:45.257 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:45.516 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.yeHP4hnoeA 00:17:45.516 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yeHP4hnoeA 00:17:45.516 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:45.516 [2024-11-20 11:12:12.995676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.775 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:45.775 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:46.033 [2024-11-20 11:12:13.372653] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:46.033 [2024-11-20 11:12:13.372898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.033 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:46.292 malloc0 00:17:46.292 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:46.292 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yeHP4hnoeA 00:17:46.551 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:46.809 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yeHP4hnoeA 00:17:56.783 Initializing NVMe Controllers 00:17:56.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:56.783 Initialization complete. Launching workers. 
00:17:56.783 ======================================================== 00:17:56.783 Latency(us) 00:17:56.783 Device Information : IOPS MiB/s Average min max 00:17:56.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16382.67 63.99 3906.64 812.16 5509.52 00:17:56.783 ======================================================== 00:17:56.783 Total : 16382.67 63.99 3906.64 812.16 5509.52 00:17:56.783 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yeHP4hnoeA 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yeHP4hnoeA 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4072753 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4072753 /var/tmp/bdevperf.sock 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4072753 ']' 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
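The `NVMeTLSkey-1:01:...` strings generated at target/tls.sh@119-120 above follow the NVMe TLS PSK interchange format: a fixed prefix, a hash identifier, and a base64 payload, colon-terminated. A minimal sketch of how SPDK's `format_key` helper (invoked via the inline `python -` seen above) appears to build it — the assumption here is that a little-endian zlib CRC32 of the configured key bytes is appended before base64 encoding:

```python
import base64
import struct
import zlib


def format_interchange_psk(key: str, hash_id: int = 1) -> str:
    """Sketch of the NVMeTLSkey-1 interchange format used in this run.

    Assumption: the configured key string gets a 4-byte little-endian
    zlib CRC32 appended before base64 encoding, which is what the
    format_key helper in nvmf/common.sh appears to do.
    """
    raw = key.encode()
    payload = raw + struct.pack("<I", zlib.crc32(raw))
    return f"NVMeTLSkey-1:{hash_id:02d}:{base64.b64encode(payload).decode()}:"


# Same inputs as target/tls.sh@120 above (key_2).
psk = format_interchange_psk("ffeeddccbbaa99887766554433221100", 1)
```

The resulting string is what gets written to the `mktemp` key files and registered with `keyring_file_add_key` later in the run.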
00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.783 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.041 [2024-11-20 11:12:24.319786] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:17:57.042 [2024-11-20 11:12:24.319836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072753 ] 00:17:57.042 [2024-11-20 11:12:24.395229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.042 [2024-11-20 11:12:24.435308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.042 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.042 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:57.042 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yeHP4hnoeA 00:17:57.300 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:17:57.560 [2024-11-20 11:12:24.910414] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.560 TLSTESTn1 00:17:57.560 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:57.818 Running I/O for 10 seconds... 00:17:59.690 5344.00 IOPS, 20.88 MiB/s [2024-11-20T10:12:28.123Z] 5282.50 IOPS, 20.63 MiB/s [2024-11-20T10:12:29.501Z] 5189.33 IOPS, 20.27 MiB/s [2024-11-20T10:12:30.436Z] 5151.25 IOPS, 20.12 MiB/s [2024-11-20T10:12:31.373Z] 5085.20 IOPS, 19.86 MiB/s [2024-11-20T10:12:32.309Z] 5058.17 IOPS, 19.76 MiB/s [2024-11-20T10:12:33.245Z] 5059.43 IOPS, 19.76 MiB/s [2024-11-20T10:12:34.180Z] 5035.50 IOPS, 19.67 MiB/s [2024-11-20T10:12:35.116Z] 5022.44 IOPS, 19.62 MiB/s [2024-11-20T10:12:35.375Z] 5020.60 IOPS, 19.61 MiB/s 00:18:07.879 Latency(us) 00:18:07.879 [2024-11-20T10:12:35.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.879 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:07.879 Verification LBA range: start 0x0 length 0x2000 00:18:07.879 TLSTESTn1 : 10.02 5023.68 19.62 0.00 0.00 25441.34 5869.75 30773.43 00:18:07.879 [2024-11-20T10:12:35.375Z] =================================================================================================================== 00:18:07.879 [2024-11-20T10:12:35.375Z] Total : 5023.68 19.62 0.00 0.00 25441.34 5869.75 30773.43 00:18:07.879 { 00:18:07.879 "results": [ 00:18:07.879 { 00:18:07.879 "job": "TLSTESTn1", 00:18:07.879 "core_mask": "0x4", 00:18:07.879 "workload": "verify", 00:18:07.879 "status": "finished", 00:18:07.879 "verify_range": { 00:18:07.879 "start": 0, 00:18:07.879 "length": 8192 00:18:07.879 }, 00:18:07.879 "queue_depth": 128, 00:18:07.879 "io_size": 4096, 00:18:07.879 "runtime": 10.019348, 00:18:07.879 "iops": 
5023.680183580808, 00:18:07.879 "mibps": 19.623750717112532, 00:18:07.879 "io_failed": 0, 00:18:07.879 "io_timeout": 0, 00:18:07.879 "avg_latency_us": 25441.338582166776, 00:18:07.879 "min_latency_us": 5869.746086956522, 00:18:07.879 "max_latency_us": 30773.426086956522 00:18:07.879 } 00:18:07.879 ], 00:18:07.879 "core_count": 1 00:18:07.879 } 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 4072753 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4072753 ']' 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4072753 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4072753 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4072753' 00:18:07.879 killing process with pid 4072753 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4072753 00:18:07.879 Received shutdown signal, test time was about 10.000000 seconds 00:18:07.879 00:18:07.879 Latency(us) 00:18:07.879 [2024-11-20T10:12:35.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.879 [2024-11-20T10:12:35.375Z] 
=================================================================================================================== 00:18:07.879 [2024-11-20T10:12:35.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4072753 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bkVpCE1lT2 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bkVpCE1lT2 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:07.879 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.880 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bkVpCE1lT2 00:18:07.880 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:07.880 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:07.880 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:07.880 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bkVpCE1lT2 00:18:07.880 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.138 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4074525 00:18:08.138 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.138 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.139 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4074525 /var/tmp/bdevperf.sock 00:18:08.139 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4074525 ']' 00:18:08.139 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.139 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.139 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.139 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.139 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.139 [2024-11-20 11:12:35.416009] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
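The result blocks above report both IOPS and MiB/s, and with the 4 KiB I/O size passed via `-o 4096` the two are related by MiB/s = IOPS × 4096 / 2^20. A quick cross-check against the numbers printed above:

```python
# Cross-check of the result blocks above: with 4 KiB I/Os,
# throughput in MiB/s is IOPS * io_size / 2**20.
io_size = 4096                      # -o 4096 in both invocations

bdevperf_iops = 5023.680183580808   # "iops" field in the JSON results above
bdevperf_mibps = bdevperf_iops * io_size / 2**20   # ~19.62, as reported

perf_iops = 16382.67                # spdk_nvme_perf summary further above
perf_mibps = perf_iops * io_size / 2**20           # ~63.99, as reported
```

Both computed values agree with the MiB/s columns in the tables, so the two tools are reporting consistent throughput.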
00:18:08.139 [2024-11-20 11:12:35.416055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074525 ] 00:18:08.139 [2024-11-20 11:12:35.489783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.139 [2024-11-20 11:12:35.532113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.139 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.139 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:08.139 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bkVpCE1lT2 00:18:08.397 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:08.656 [2024-11-20 11:12:35.974755] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.656 [2024-11-20 11:12:35.984041] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:08.656 [2024-11-20 11:12:35.984108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1853170 (107): Transport endpoint is not connected 00:18:08.656 [2024-11-20 11:12:35.985100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1853170 (9): Bad file descriptor 00:18:08.656 
[2024-11-20 11:12:35.986102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:08.656 [2024-11-20 11:12:35.986116] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:08.656 [2024-11-20 11:12:35.986124] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:08.656 [2024-11-20 11:12:35.986134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:08.656 request: 00:18:08.656 { 00:18:08.656 "name": "TLSTEST", 00:18:08.656 "trtype": "tcp", 00:18:08.656 "traddr": "10.0.0.2", 00:18:08.656 "adrfam": "ipv4", 00:18:08.656 "trsvcid": "4420", 00:18:08.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.656 "prchk_reftag": false, 00:18:08.656 "prchk_guard": false, 00:18:08.656 "hdgst": false, 00:18:08.656 "ddgst": false, 00:18:08.656 "psk": "key0", 00:18:08.656 "allow_unrecognized_csi": false, 00:18:08.656 "method": "bdev_nvme_attach_controller", 00:18:08.656 "req_id": 1 00:18:08.656 } 00:18:08.656 Got JSON-RPC error response 00:18:08.656 response: 00:18:08.656 { 00:18:08.656 "code": -5, 00:18:08.656 "message": "Input/output error" 00:18:08.656 } 00:18:08.656 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4074525 00:18:08.656 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4074525 ']' 00:18:08.656 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4074525 00:18:08.656 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:08.656 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.656 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4074525 00:18:08.656 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:08.657 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:08.657 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4074525' 00:18:08.657 killing process with pid 4074525 00:18:08.657 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4074525 00:18:08.657 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.657 00:18:08.657 Latency(us) 00:18:08.657 [2024-11-20T10:12:36.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.657 [2024-11-20T10:12:36.153Z] =================================================================================================================== 00:18:08.657 [2024-11-20T10:12:36.153Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.657 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4074525 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yeHP4hnoeA 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
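This failed attach (and the ones that follow) returns JSON-RPC `"code": -5` with `"message": "Input/output error"`. That pairing is consistent with SPDK propagating a negated POSIX errno out of `bdev_nvme_attach_controller`; a minimal check, assuming a Linux errno table where EIO is 5:

```python
import errno
import os

# JSON-RPC error code -5 from the responses above, read as a negated errno.
rpc_code = -5
assert -rpc_code == errno.EIO       # EIO is 5 on Linux
message = os.strerror(-rpc_code)    # the "Input/output error" string above
```

This is why the `NOT` wrapper in autotest_common.sh treats the nonzero return (`es=1`) as the expected outcome for these negative tests.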
00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yeHP4hnoeA 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yeHP4hnoeA 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yeHP4hnoeA 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4074606 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4074606 
/var/tmp/bdevperf.sock 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4074606 ']' 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.915 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.915 [2024-11-20 11:12:36.233485] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:18:08.915 [2024-11-20 11:12:36.233534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074606 ] 00:18:08.915 [2024-11-20 11:12:36.302856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.915 [2024-11-20 11:12:36.342241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.173 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.173 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.173 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yeHP4hnoeA 00:18:09.173 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:09.432 [2024-11-20 11:12:36.825219] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.432 [2024-11-20 11:12:36.836082] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:09.432 [2024-11-20 11:12:36.836104] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:09.432 [2024-11-20 11:12:36.836127] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:09.432 [2024-11-20 11:12:36.836585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0d170 (107): Transport endpoint is not connected 00:18:09.432 [2024-11-20 11:12:36.837579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0d170 (9): Bad file descriptor 00:18:09.432 [2024-11-20 11:12:36.838581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:09.432 [2024-11-20 11:12:36.838592] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:09.432 [2024-11-20 11:12:36.838599] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:09.432 [2024-11-20 11:12:36.838611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:09.432 request: 00:18:09.432 { 00:18:09.432 "name": "TLSTEST", 00:18:09.432 "trtype": "tcp", 00:18:09.432 "traddr": "10.0.0.2", 00:18:09.432 "adrfam": "ipv4", 00:18:09.432 "trsvcid": "4420", 00:18:09.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.432 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:09.432 "prchk_reftag": false, 00:18:09.432 "prchk_guard": false, 00:18:09.432 "hdgst": false, 00:18:09.432 "ddgst": false, 00:18:09.432 "psk": "key0", 00:18:09.432 "allow_unrecognized_csi": false, 00:18:09.432 "method": "bdev_nvme_attach_controller", 00:18:09.432 "req_id": 1 00:18:09.432 } 00:18:09.432 Got JSON-RPC error response 00:18:09.432 response: 00:18:09.432 { 00:18:09.432 "code": -5, 00:18:09.432 "message": "Input/output error" 00:18:09.432 } 00:18:09.432 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4074606 00:18:09.432 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4074606 ']' 00:18:09.432 11:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4074606 00:18:09.432 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:09.432 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.432 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4074606 00:18:09.432 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:09.432 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:09.432 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4074606' 00:18:09.432 killing process with pid 4074606 00:18:09.432 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4074606 00:18:09.432 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.432 00:18:09.433 Latency(us) 00:18:09.433 [2024-11-20T10:12:36.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.433 [2024-11-20T10:12:36.929Z] =================================================================================================================== 00:18:09.433 [2024-11-20T10:12:36.929Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.433 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4074606 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.691 11:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yeHP4hnoeA 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yeHP4hnoeA 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yeHP4hnoeA 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yeHP4hnoeA 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4074836 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4074836 /var/tmp/bdevperf.sock 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4074836 ']' 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.691 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.692 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.692 [2024-11-20 11:12:37.091546] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
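The mismatched-NQN failures log the exact TLS PSK identity the target tried to resolve, e.g. `Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1` above. The identity appears to be a fixed `NVMe0R01` prefix followed by the host NQN and subsystem NQN; a sketch, assuming that prefix is constant for this configuration (retained PSK, hash id 01 matching the key's `:01:` field):

```python
def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    """Sketch of the TLS PSK identity string seen in the
    tcp_sock_get_key / posix_sock_psk_find_session_server_cb errors.

    Assumption: the 'NVMe0R01' prefix is fixed for this setup; only the
    host NQN and subsystem NQN vary between the failing attach attempts.
    """
    return f"NVMe0R01 {hostnqn} {subnqn}"


# The wrong-hostnqn case from the log above.
identity = tls_psk_identity("nqn.2016-06.io.spdk:host2",
                            "nqn.2016-06.io.spdk:cnode1")
```

Because only `nqn.2016-06.io.spdk:host1` was registered with `nvmf_subsystem_add_host --psk key0`, any other host/subsystem combination produces a lookup miss and the handshake fails before the controller initializes.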
00:18:09.692 [2024-11-20 11:12:37.091594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074836 ] 00:18:09.692 [2024-11-20 11:12:37.160346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.951 [2024-11-20 11:12:37.203707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.951 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.951 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.951 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yeHP4hnoeA 00:18:10.210 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:10.210 [2024-11-20 11:12:37.658651] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.210 [2024-11-20 11:12:37.668253] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:10.210 [2024-11-20 11:12:37.668276] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:10.210 [2024-11-20 11:12:37.668300] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:10.210 [2024-11-20 11:12:37.669042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a0170 (107): Transport endpoint is not connected 00:18:10.210 [2024-11-20 11:12:37.670036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a0170 (9): Bad file descriptor 00:18:10.210 [2024-11-20 11:12:37.671037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:10.210 [2024-11-20 11:12:37.671048] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:10.210 [2024-11-20 11:12:37.671056] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:10.210 [2024-11-20 11:12:37.671067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:10.210 request: 00:18:10.210 { 00:18:10.210 "name": "TLSTEST", 00:18:10.210 "trtype": "tcp", 00:18:10.210 "traddr": "10.0.0.2", 00:18:10.210 "adrfam": "ipv4", 00:18:10.210 "trsvcid": "4420", 00:18:10.210 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:10.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.210 "prchk_reftag": false, 00:18:10.210 "prchk_guard": false, 00:18:10.210 "hdgst": false, 00:18:10.210 "ddgst": false, 00:18:10.210 "psk": "key0", 00:18:10.210 "allow_unrecognized_csi": false, 00:18:10.210 "method": "bdev_nvme_attach_controller", 00:18:10.210 "req_id": 1 00:18:10.210 } 00:18:10.210 Got JSON-RPC error response 00:18:10.210 response: 00:18:10.210 { 00:18:10.210 "code": -5, 00:18:10.210 "message": "Input/output error" 00:18:10.210 } 00:18:10.210 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4074836 00:18:10.210 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4074836 ']' 00:18:10.210 11:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4074836 00:18:10.210 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.210 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.210 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4074836 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4074836' 00:18:10.470 killing process with pid 4074836 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4074836 00:18:10.470 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.470 00:18:10.470 Latency(us) 00:18:10.470 [2024-11-20T10:12:37.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.470 [2024-11-20T10:12:37.966Z] =================================================================================================================== 00:18:10.470 [2024-11-20T10:12:37.966Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4074836 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.470 11:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4074863 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.470 11:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4074863 /var/tmp/bdevperf.sock 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4074863 ']' 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.470 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.471 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.471 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.471 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.471 [2024-11-20 11:12:37.922838] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:18:10.471 [2024-11-20 11:12:37.922888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074863 ] 00:18:10.730 [2024-11-20 11:12:37.994499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.730 [2024-11-20 11:12:38.036919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.730 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.730 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.730 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:10.989 [2024-11-20 11:12:38.296241] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:10.989 [2024-11-20 11:12:38.296270] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:10.989 request: 00:18:10.989 { 00:18:10.989 "name": "key0", 00:18:10.989 "path": "", 00:18:10.989 "method": "keyring_file_add_key", 00:18:10.989 "req_id": 1 00:18:10.989 } 00:18:10.989 Got JSON-RPC error response 00:18:10.989 response: 00:18:10.989 { 00:18:10.989 "code": -1, 00:18:10.989 "message": "Operation not permitted" 00:18:10.989 } 00:18:10.989 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:10.989 [2024-11-20 11:12:38.480813] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:10.989 [2024-11-20 11:12:38.480843] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:11.249 request: 00:18:11.249 { 00:18:11.249 "name": "TLSTEST", 00:18:11.249 "trtype": "tcp", 00:18:11.249 "traddr": "10.0.0.2", 00:18:11.249 "adrfam": "ipv4", 00:18:11.249 "trsvcid": "4420", 00:18:11.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.249 "prchk_reftag": false, 00:18:11.249 "prchk_guard": false, 00:18:11.249 "hdgst": false, 00:18:11.249 "ddgst": false, 00:18:11.249 "psk": "key0", 00:18:11.249 "allow_unrecognized_csi": false, 00:18:11.249 "method": "bdev_nvme_attach_controller", 00:18:11.249 "req_id": 1 00:18:11.249 } 00:18:11.249 Got JSON-RPC error response 00:18:11.249 response: 00:18:11.249 { 00:18:11.249 "code": -126, 00:18:11.249 "message": "Required key not available" 00:18:11.249 } 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4074863 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4074863 ']' 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4074863 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4074863 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4074863' 00:18:11.249 killing process with pid 4074863 
00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4074863 00:18:11.249 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.249 00:18:11.249 Latency(us) 00:18:11.249 [2024-11-20T10:12:38.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.249 [2024-11-20T10:12:38.745Z] =================================================================================================================== 00:18:11.249 [2024-11-20T10:12:38.745Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4074863 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 4070304 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4070304 ']' 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4070304 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.249 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4070304 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4070304' 00:18:11.509 killing process with pid 4070304 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4070304 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4070304 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.L9jFKNf80K 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.509 11:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.L9jFKNf80K 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4075108 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4075108 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4075108 ']' 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.509 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.769 [2024-11-20 11:12:39.038829] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:18:11.769 [2024-11-20 11:12:39.038880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.769 [2024-11-20 11:12:39.111912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.769 [2024-11-20 11:12:39.148391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.769 [2024-11-20 11:12:39.148424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.769 [2024-11-20 11:12:39.148432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.769 [2024-11-20 11:12:39.148438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.769 [2024-11-20 11:12:39.148443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:11.769 [2024-11-20 11:12:39.149020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.769 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.769 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:11.769 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:11.769 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:11.769 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.028 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.028 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.L9jFKNf80K 00:18:12.028 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.L9jFKNf80K 00:18:12.028 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:12.028 [2024-11-20 11:12:39.463957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.028 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.287 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:12.547 [2024-11-20 11:12:39.844943] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.547 [2024-11-20 11:12:39.845174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:12.547 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:12.547 malloc0 00:18:12.805 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:12.805 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.L9jFKNf80K 00:18:13.065 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L9jFKNf80K 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.L9jFKNf80K 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4075360 00:18:13.324 11:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4075360 /var/tmp/bdevperf.sock 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4075360 ']' 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.324 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.324 [2024-11-20 11:12:40.670203] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:18:13.324 [2024-11-20 11:12:40.670250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4075360 ] 00:18:13.324 [2024-11-20 11:12:40.747182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.324 [2024-11-20 11:12:40.790277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.583 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.583 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.583 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9jFKNf80K 00:18:13.583 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.842 [2024-11-20 11:12:41.237195] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.842 TLSTESTn1 00:18:13.842 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:14.102 Running I/O for 10 seconds... 
00:18:15.975 5187.00 IOPS, 20.26 MiB/s [2024-11-20T10:12:44.851Z] 5274.00 IOPS, 20.60 MiB/s [2024-11-20T10:12:45.789Z] 5323.00 IOPS, 20.79 MiB/s [2024-11-20T10:12:46.725Z] 5337.25 IOPS, 20.85 MiB/s [2024-11-20T10:12:47.662Z] 5352.20 IOPS, 20.91 MiB/s [2024-11-20T10:12:48.598Z] 5348.00 IOPS, 20.89 MiB/s [2024-11-20T10:12:49.531Z] 5344.43 IOPS, 20.88 MiB/s [2024-11-20T10:12:50.466Z] 5341.12 IOPS, 20.86 MiB/s [2024-11-20T10:12:51.844Z] 5338.78 IOPS, 20.85 MiB/s [2024-11-20T10:12:51.844Z] 5342.90 IOPS, 20.87 MiB/s 00:18:24.348 Latency(us) 00:18:24.348 [2024-11-20T10:12:51.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.348 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:24.348 Verification LBA range: start 0x0 length 0x2000 00:18:24.348 TLSTESTn1 : 10.01 5347.71 20.89 0.00 0.00 23899.14 6354.14 37611.97 00:18:24.348 [2024-11-20T10:12:51.844Z] =================================================================================================================== 00:18:24.348 [2024-11-20T10:12:51.844Z] Total : 5347.71 20.89 0.00 0.00 23899.14 6354.14 37611.97 00:18:24.348 { 00:18:24.348 "results": [ 00:18:24.348 { 00:18:24.348 "job": "TLSTESTn1", 00:18:24.348 "core_mask": "0x4", 00:18:24.348 "workload": "verify", 00:18:24.348 "status": "finished", 00:18:24.348 "verify_range": { 00:18:24.348 "start": 0, 00:18:24.348 "length": 8192 00:18:24.348 }, 00:18:24.348 "queue_depth": 128, 00:18:24.348 "io_size": 4096, 00:18:24.348 "runtime": 10.014576, 00:18:24.348 "iops": 5347.705184922457, 00:18:24.348 "mibps": 20.88947337860335, 00:18:24.348 "io_failed": 0, 00:18:24.348 "io_timeout": 0, 00:18:24.348 "avg_latency_us": 23899.137217894648, 00:18:24.348 "min_latency_us": 6354.142608695653, 00:18:24.348 "max_latency_us": 37611.965217391305 00:18:24.348 } 00:18:24.348 ], 00:18:24.348 "core_count": 1 00:18:24.348 } 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 4075360 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4075360 ']' 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4075360 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4075360 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4075360' 00:18:24.348 killing process with pid 4075360 00:18:24.348 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4075360 00:18:24.348 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.348 00:18:24.348 Latency(us) 00:18:24.348 [2024-11-20T10:12:51.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.348 [2024-11-20T10:12:51.844Z] =================================================================================================================== 00:18:24.349 [2024-11-20T10:12:51.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4075360 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.L9jFKNf80K 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L9jFKNf80K 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L9jFKNf80K 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L9jFKNf80K 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.L9jFKNf80K 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4077194 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4077194 /var/tmp/bdevperf.sock 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4077194 ']' 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.349 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.349 [2024-11-20 11:12:51.739931] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:18:24.349 [2024-11-20 11:12:51.739986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077194 ] 00:18:24.349 [2024-11-20 11:12:51.815921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.609 [2024-11-20 11:12:51.853781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.609 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.609 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.609 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9jFKNf80K 00:18:24.867 [2024-11-20 11:12:52.119753] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.L9jFKNf80K': 0100666 00:18:24.868 [2024-11-20 11:12:52.119784] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:24.868 request: 00:18:24.868 { 00:18:24.868 "name": "key0", 00:18:24.868 "path": "/tmp/tmp.L9jFKNf80K", 00:18:24.868 "method": "keyring_file_add_key", 00:18:24.868 "req_id": 1 00:18:24.868 } 00:18:24.868 Got JSON-RPC error response 00:18:24.868 response: 00:18:24.868 { 00:18:24.868 "code": -1, 00:18:24.868 "message": "Operation not permitted" 00:18:24.868 } 00:18:24.868 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.868 [2024-11-20 11:12:52.324372] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.868 [2024-11-20 11:12:52.324398] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:24.868 request: 00:18:24.868 { 00:18:24.868 "name": "TLSTEST", 00:18:24.868 "trtype": "tcp", 00:18:24.868 "traddr": "10.0.0.2", 00:18:24.868 "adrfam": "ipv4", 00:18:24.868 "trsvcid": "4420", 00:18:24.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.868 "prchk_reftag": false, 00:18:24.868 "prchk_guard": false, 00:18:24.868 "hdgst": false, 00:18:24.868 "ddgst": false, 00:18:24.868 "psk": "key0", 00:18:24.868 "allow_unrecognized_csi": false, 00:18:24.868 "method": "bdev_nvme_attach_controller", 00:18:24.868 "req_id": 1 00:18:24.868 } 00:18:24.868 Got JSON-RPC error response 00:18:24.868 response: 00:18:24.868 { 00:18:24.868 "code": -126, 00:18:24.868 "message": "Required key not available" 00:18:24.868 } 00:18:24.868 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4077194 00:18:24.868 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4077194 ']' 00:18:24.868 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4077194 00:18:24.868 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077194 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 4077194' 00:18:25.127 killing process with pid 4077194 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4077194 00:18:25.127 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.127 00:18:25.127 Latency(us) 00:18:25.127 [2024-11-20T10:12:52.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.127 [2024-11-20T10:12:52.623Z] =================================================================================================================== 00:18:25.127 [2024-11-20T10:12:52.623Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4077194 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 4075108 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4075108 ']' 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4075108 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4075108 00:18:25.127 
11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4075108' 00:18:25.127 killing process with pid 4075108 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4075108 00:18:25.127 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4075108 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4077433 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4077433 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4077433 ']' 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:25.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.386 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.386 [2024-11-20 11:12:52.823816] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:18:25.386 [2024-11-20 11:12:52.823860] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.645 [2024-11-20 11:12:52.903626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.645 [2024-11-20 11:12:52.939707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.645 [2024-11-20 11:12:52.939743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.645 [2024-11-20 11:12:52.939751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.645 [2024-11-20 11:12:52.939757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.645 [2024-11-20 11:12:52.939763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
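Both app launches in this log (`waitforlisten 4077194 /var/tmp/bdevperf.sock`, `waitforlisten 4077433`) go through `autotest_common.sh@839-844`, which sets `rpc_addr` and `max_retries=100` before the "Waiting for process to start up and listen on UNIX domain socket..." message. The real helper is in SPDK's `autotest_common.sh`; this is just a polling sketch under that assumption.

```shell
# Hedged sketch of the waitforlisten pattern from the trace (assumption: it
# polls until the target's RPC UNIX socket exists, up to max_retries tries).
waitforlisten_sketch() {
    local rpc_addr=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [ -S "$rpc_addr" ] && return 0   # socket file exists: app is listening
        sleep 0.1
    done
    return 1                             # gave up waiting
}
```

The real helper additionally probes the socket via `rpc.py` rather than only checking for the file; the sketch keeps just the retry loop visible in the trace.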
00:18:25.645 [2024-11-20 11:12:52.940352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.645 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.645 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:25.645 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:25.645 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.L9jFKNf80K 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.L9jFKNf80K 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.L9jFKNf80K 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.L9jFKNf80K 00:18:25.646 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:25.904 [2024-11-20 11:12:53.252061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.904 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:26.186 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:26.186 [2024-11-20 11:12:53.649103] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.186 [2024-11-20 11:12:53.649320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.470 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:26.470 malloc0 00:18:26.470 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:26.788 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.L9jFKNf80K 00:18:26.788 [2024-11-20 11:12:54.246778] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.L9jFKNf80K': 0100666 00:18:26.788 [2024-11-20 11:12:54.246809] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:26.788 request: 00:18:26.789 { 00:18:26.789 "name": "key0", 00:18:26.789 "path": "/tmp/tmp.L9jFKNf80K", 00:18:26.789 "method": "keyring_file_add_key", 00:18:26.789 "req_id": 1 
00:18:26.789 } 00:18:26.789 Got JSON-RPC error response 00:18:26.789 response: 00:18:26.789 { 00:18:26.789 "code": -1, 00:18:26.789 "message": "Operation not permitted" 00:18:26.789 } 00:18:26.789 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.049 [2024-11-20 11:12:54.443302] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:27.049 [2024-11-20 11:12:54.443333] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:27.049 request: 00:18:27.049 { 00:18:27.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.049 "host": "nqn.2016-06.io.spdk:host1", 00:18:27.049 "psk": "key0", 00:18:27.049 "method": "nvmf_subsystem_add_host", 00:18:27.049 "req_id": 1 00:18:27.049 } 00:18:27.049 Got JSON-RPC error response 00:18:27.049 response: 00:18:27.049 { 00:18:27.049 "code": -32603, 00:18:27.049 "message": "Internal error" 00:18:27.049 } 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 4077433 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4077433 ']' 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4077433 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.049 11:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077433 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077433' 00:18:27.049 killing process with pid 4077433 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4077433 00:18:27.049 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4077433 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.L9jFKNf80K 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4077708 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4077708 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4077708 ']' 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.309 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.309 [2024-11-20 11:12:54.747821] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:18:27.309 [2024-11-20 11:12:54.747864] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.569 [2024-11-20 11:12:54.825768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.569 [2024-11-20 11:12:54.861616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.569 [2024-11-20 11:12:54.861651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.569 [2024-11-20 11:12:54.861658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.569 [2024-11-20 11:12:54.861664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.569 [2024-11-20 11:12:54.861670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
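The repeated `keyring_file_check_path: *ERROR*: Invalid permissions for key file ... 0100666` failures above, followed by `target/tls.sh@182 -- # chmod 0600 /tmp/tmp.L9jFKNf80K` and a clean retry, show that the keyring rejects PSK files readable by group or other. The sketch below reproduces only that mode check with `stat`/`chmod`, as an illustration; it is not SPDK's actual C-side check in `keyring.c`.

```shell
# Hedged reproduction of the permission gate seen in the trace (assumption:
# group/other permission bits on the PSK file must be clear, i.e. mode 0600).
key=$(mktemp)

chmod 0666 "$key"                 # the state the test starts in (0100666)
mode=$(stat -c '%a' "$key")
[ "$mode" = "666" ] && echo "would be rejected: 0$mode"

chmod 0600 "$key"                 # what tls.sh@182 does before retrying
mode=$(stat -c '%a' "$key")
[ "$mode" = "600" ] && echo "acceptable: 0$mode"

rm -f "$key"
```

After the `chmod`, the same `keyring_file_add_key key0 /tmp/tmp.L9jFKNf80K` RPC succeeds in the later part of the log and the `nvmf_subsystem_add_host ... --psk key0` call no longer returns "Internal error".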
00:18:27.569 [2024-11-20 11:12:54.862271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.569 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.569 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.569 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:27.569 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.569 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.569 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.569 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.L9jFKNf80K 00:18:27.569 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.L9jFKNf80K 00:18:27.569 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:27.828 [2024-11-20 11:12:55.181527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.828 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:28.087 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:28.087 [2024-11-20 11:12:55.570525] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.087 [2024-11-20 11:12:55.570739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:28.346 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:28.346 malloc0 00:18:28.346 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:28.605 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.L9jFKNf80K 00:18:28.865 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.124 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:29.124 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=4077981 00:18:29.124 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.124 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 4077981 /var/tmp/bdevperf.sock 00:18:29.124 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4077981 ']' 00:18:29.124 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.124 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.124 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:29.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.124 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.124 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.124 [2024-11-20 11:12:56.414678] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:18:29.124 [2024-11-20 11:12:56.414729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077981 ] 00:18:29.124 [2024-11-20 11:12:56.493084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.124 [2024-11-20 11:12:56.533619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.384 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.384 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.384 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9jFKNf80K 00:18:29.384 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.643 [2024-11-20 11:12:57.001132] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.643 TLSTESTn1 00:18:29.643 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:29.903 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:29.903 "subsystems": [ 00:18:29.903 { 00:18:29.903 "subsystem": "keyring", 00:18:29.903 "config": [ 00:18:29.903 { 00:18:29.903 "method": "keyring_file_add_key", 00:18:29.903 "params": { 00:18:29.903 "name": "key0", 00:18:29.903 "path": "/tmp/tmp.L9jFKNf80K" 00:18:29.903 } 00:18:29.903 } 00:18:29.903 ] 00:18:29.903 }, 00:18:29.903 { 00:18:29.903 "subsystem": "iobuf", 00:18:29.903 "config": [ 00:18:29.903 { 00:18:29.903 "method": "iobuf_set_options", 00:18:29.903 "params": { 00:18:29.903 "small_pool_count": 8192, 00:18:29.903 "large_pool_count": 1024, 00:18:29.903 "small_bufsize": 8192, 00:18:29.903 "large_bufsize": 135168, 00:18:29.903 "enable_numa": false 00:18:29.903 } 00:18:29.903 } 00:18:29.903 ] 00:18:29.903 }, 00:18:29.903 { 00:18:29.903 "subsystem": "sock", 00:18:29.903 "config": [ 00:18:29.903 { 00:18:29.903 "method": "sock_set_default_impl", 00:18:29.903 "params": { 00:18:29.903 "impl_name": "posix" 00:18:29.903 } 00:18:29.903 }, 00:18:29.903 { 00:18:29.903 "method": "sock_impl_set_options", 00:18:29.903 "params": { 00:18:29.903 "impl_name": "ssl", 00:18:29.903 "recv_buf_size": 4096, 00:18:29.903 "send_buf_size": 4096, 00:18:29.903 "enable_recv_pipe": true, 00:18:29.903 "enable_quickack": false, 00:18:29.903 "enable_placement_id": 0, 00:18:29.903 "enable_zerocopy_send_server": true, 00:18:29.903 "enable_zerocopy_send_client": false, 00:18:29.903 "zerocopy_threshold": 0, 00:18:29.903 "tls_version": 0, 00:18:29.903 "enable_ktls": false 00:18:29.903 } 00:18:29.903 }, 00:18:29.903 { 00:18:29.903 "method": "sock_impl_set_options", 00:18:29.903 "params": { 00:18:29.903 "impl_name": "posix", 00:18:29.903 "recv_buf_size": 2097152, 00:18:29.903 "send_buf_size": 2097152, 00:18:29.903 "enable_recv_pipe": true, 00:18:29.903 "enable_quickack": false, 00:18:29.903 "enable_placement_id": 0, 
00:18:29.903 "enable_zerocopy_send_server": true, 00:18:29.903 "enable_zerocopy_send_client": false, 00:18:29.903 "zerocopy_threshold": 0, 00:18:29.903 "tls_version": 0, 00:18:29.903 "enable_ktls": false 00:18:29.903 } 00:18:29.903 } 00:18:29.903 ] 00:18:29.903 }, 00:18:29.903 { 00:18:29.903 "subsystem": "vmd", 00:18:29.903 "config": [] 00:18:29.903 }, 00:18:29.903 { 00:18:29.903 "subsystem": "accel", 00:18:29.903 "config": [ 00:18:29.903 { 00:18:29.903 "method": "accel_set_options", 00:18:29.903 "params": { 00:18:29.903 "small_cache_size": 128, 00:18:29.903 "large_cache_size": 16, 00:18:29.903 "task_count": 2048, 00:18:29.903 "sequence_count": 2048, 00:18:29.903 "buf_count": 2048 00:18:29.903 } 00:18:29.903 } 00:18:29.903 ] 00:18:29.903 }, 00:18:29.903 { 00:18:29.903 "subsystem": "bdev", 00:18:29.903 "config": [ 00:18:29.903 { 00:18:29.903 "method": "bdev_set_options", 00:18:29.903 "params": { 00:18:29.903 "bdev_io_pool_size": 65535, 00:18:29.903 "bdev_io_cache_size": 256, 00:18:29.903 "bdev_auto_examine": true, 00:18:29.903 "iobuf_small_cache_size": 128, 00:18:29.903 "iobuf_large_cache_size": 16 00:18:29.903 } 00:18:29.903 }, 00:18:29.903 { 00:18:29.903 "method": "bdev_raid_set_options", 00:18:29.903 "params": { 00:18:29.903 "process_window_size_kb": 1024, 00:18:29.903 "process_max_bandwidth_mb_sec": 0 00:18:29.903 } 00:18:29.903 }, 00:18:29.903 { 00:18:29.903 "method": "bdev_iscsi_set_options", 00:18:29.903 "params": { 00:18:29.903 "timeout_sec": 30 00:18:29.903 } 00:18:29.903 }, 00:18:29.903 { 00:18:29.903 "method": "bdev_nvme_set_options", 00:18:29.903 "params": { 00:18:29.903 "action_on_timeout": "none", 00:18:29.903 "timeout_us": 0, 00:18:29.903 "timeout_admin_us": 0, 00:18:29.903 "keep_alive_timeout_ms": 10000, 00:18:29.903 "arbitration_burst": 0, 00:18:29.903 "low_priority_weight": 0, 00:18:29.903 "medium_priority_weight": 0, 00:18:29.903 "high_priority_weight": 0, 00:18:29.903 "nvme_adminq_poll_period_us": 10000, 00:18:29.903 "nvme_ioq_poll_period_us": 0, 
00:18:29.903 "io_queue_requests": 0, 00:18:29.903 "delay_cmd_submit": true, 00:18:29.903 "transport_retry_count": 4, 00:18:29.903 "bdev_retry_count": 3, 00:18:29.903 "transport_ack_timeout": 0, 00:18:29.903 "ctrlr_loss_timeout_sec": 0, 00:18:29.903 "reconnect_delay_sec": 0, 00:18:29.903 "fast_io_fail_timeout_sec": 0, 00:18:29.903 "disable_auto_failback": false, 00:18:29.903 "generate_uuids": false, 00:18:29.903 "transport_tos": 0, 00:18:29.903 "nvme_error_stat": false, 00:18:29.904 "rdma_srq_size": 0, 00:18:29.904 "io_path_stat": false, 00:18:29.904 "allow_accel_sequence": false, 00:18:29.904 "rdma_max_cq_size": 0, 00:18:29.904 "rdma_cm_event_timeout_ms": 0, 00:18:29.904 "dhchap_digests": [ 00:18:29.904 "sha256", 00:18:29.904 "sha384", 00:18:29.904 "sha512" 00:18:29.904 ], 00:18:29.904 "dhchap_dhgroups": [ 00:18:29.904 "null", 00:18:29.904 "ffdhe2048", 00:18:29.904 "ffdhe3072", 00:18:29.904 "ffdhe4096", 00:18:29.904 "ffdhe6144", 00:18:29.904 "ffdhe8192" 00:18:29.904 ] 00:18:29.904 } 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "method": "bdev_nvme_set_hotplug", 00:18:29.904 "params": { 00:18:29.904 "period_us": 100000, 00:18:29.904 "enable": false 00:18:29.904 } 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "method": "bdev_malloc_create", 00:18:29.904 "params": { 00:18:29.904 "name": "malloc0", 00:18:29.904 "num_blocks": 8192, 00:18:29.904 "block_size": 4096, 00:18:29.904 "physical_block_size": 4096, 00:18:29.904 "uuid": "c2d971f9-f21c-411d-95ca-d744cfb1b0a4", 00:18:29.904 "optimal_io_boundary": 0, 00:18:29.904 "md_size": 0, 00:18:29.904 "dif_type": 0, 00:18:29.904 "dif_is_head_of_md": false, 00:18:29.904 "dif_pi_format": 0 00:18:29.904 } 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "method": "bdev_wait_for_examine" 00:18:29.904 } 00:18:29.904 ] 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "subsystem": "nbd", 00:18:29.904 "config": [] 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "subsystem": "scheduler", 00:18:29.904 "config": [ 00:18:29.904 { 00:18:29.904 "method": 
"framework_set_scheduler", 00:18:29.904 "params": { 00:18:29.904 "name": "static" 00:18:29.904 } 00:18:29.904 } 00:18:29.904 ] 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "subsystem": "nvmf", 00:18:29.904 "config": [ 00:18:29.904 { 00:18:29.904 "method": "nvmf_set_config", 00:18:29.904 "params": { 00:18:29.904 "discovery_filter": "match_any", 00:18:29.904 "admin_cmd_passthru": { 00:18:29.904 "identify_ctrlr": false 00:18:29.904 }, 00:18:29.904 "dhchap_digests": [ 00:18:29.904 "sha256", 00:18:29.904 "sha384", 00:18:29.904 "sha512" 00:18:29.904 ], 00:18:29.904 "dhchap_dhgroups": [ 00:18:29.904 "null", 00:18:29.904 "ffdhe2048", 00:18:29.904 "ffdhe3072", 00:18:29.904 "ffdhe4096", 00:18:29.904 "ffdhe6144", 00:18:29.904 "ffdhe8192" 00:18:29.904 ] 00:18:29.904 } 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "method": "nvmf_set_max_subsystems", 00:18:29.904 "params": { 00:18:29.904 "max_subsystems": 1024 00:18:29.904 } 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "method": "nvmf_set_crdt", 00:18:29.904 "params": { 00:18:29.904 "crdt1": 0, 00:18:29.904 "crdt2": 0, 00:18:29.904 "crdt3": 0 00:18:29.904 } 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "method": "nvmf_create_transport", 00:18:29.904 "params": { 00:18:29.904 "trtype": "TCP", 00:18:29.904 "max_queue_depth": 128, 00:18:29.904 "max_io_qpairs_per_ctrlr": 127, 00:18:29.904 "in_capsule_data_size": 4096, 00:18:29.904 "max_io_size": 131072, 00:18:29.904 "io_unit_size": 131072, 00:18:29.904 "max_aq_depth": 128, 00:18:29.904 "num_shared_buffers": 511, 00:18:29.904 "buf_cache_size": 4294967295, 00:18:29.904 "dif_insert_or_strip": false, 00:18:29.904 "zcopy": false, 00:18:29.904 "c2h_success": false, 00:18:29.904 "sock_priority": 0, 00:18:29.904 "abort_timeout_sec": 1, 00:18:29.904 "ack_timeout": 0, 00:18:29.904 "data_wr_pool_size": 0 00:18:29.904 } 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "method": "nvmf_create_subsystem", 00:18:29.904 "params": { 00:18:29.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.904 
"allow_any_host": false, 00:18:29.904 "serial_number": "SPDK00000000000001", 00:18:29.904 "model_number": "SPDK bdev Controller", 00:18:29.904 "max_namespaces": 10, 00:18:29.904 "min_cntlid": 1, 00:18:29.904 "max_cntlid": 65519, 00:18:29.904 "ana_reporting": false 00:18:29.904 } 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "method": "nvmf_subsystem_add_host", 00:18:29.904 "params": { 00:18:29.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.904 "host": "nqn.2016-06.io.spdk:host1", 00:18:29.904 "psk": "key0" 00:18:29.904 } 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "method": "nvmf_subsystem_add_ns", 00:18:29.904 "params": { 00:18:29.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.904 "namespace": { 00:18:29.904 "nsid": 1, 00:18:29.904 "bdev_name": "malloc0", 00:18:29.904 "nguid": "C2D971F9F21C411D95CAD744CFB1B0A4", 00:18:29.904 "uuid": "c2d971f9-f21c-411d-95ca-d744cfb1b0a4", 00:18:29.904 "no_auto_visible": false 00:18:29.904 } 00:18:29.904 } 00:18:29.904 }, 00:18:29.904 { 00:18:29.904 "method": "nvmf_subsystem_add_listener", 00:18:29.904 "params": { 00:18:29.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.904 "listen_address": { 00:18:29.904 "trtype": "TCP", 00:18:29.904 "adrfam": "IPv4", 00:18:29.904 "traddr": "10.0.0.2", 00:18:29.904 "trsvcid": "4420" 00:18:29.904 }, 00:18:29.904 "secure_channel": true 00:18:29.904 } 00:18:29.904 } 00:18:29.904 ] 00:18:29.904 } 00:18:29.904 ] 00:18:29.904 }' 00:18:29.904 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:30.163 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:30.163 "subsystems": [ 00:18:30.163 { 00:18:30.163 "subsystem": "keyring", 00:18:30.163 "config": [ 00:18:30.163 { 00:18:30.163 "method": "keyring_file_add_key", 00:18:30.163 "params": { 00:18:30.163 "name": "key0", 00:18:30.163 "path": "/tmp/tmp.L9jFKNf80K" 00:18:30.163 } 
00:18:30.163 } 00:18:30.163 ] 00:18:30.163 }, 00:18:30.163 { 00:18:30.163 "subsystem": "iobuf", 00:18:30.163 "config": [ 00:18:30.163 { 00:18:30.163 "method": "iobuf_set_options", 00:18:30.163 "params": { 00:18:30.163 "small_pool_count": 8192, 00:18:30.163 "large_pool_count": 1024, 00:18:30.163 "small_bufsize": 8192, 00:18:30.163 "large_bufsize": 135168, 00:18:30.163 "enable_numa": false 00:18:30.163 } 00:18:30.163 } 00:18:30.163 ] 00:18:30.163 }, 00:18:30.163 { 00:18:30.163 "subsystem": "sock", 00:18:30.163 "config": [ 00:18:30.163 { 00:18:30.163 "method": "sock_set_default_impl", 00:18:30.163 "params": { 00:18:30.164 "impl_name": "posix" 00:18:30.164 } 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "method": "sock_impl_set_options", 00:18:30.164 "params": { 00:18:30.164 "impl_name": "ssl", 00:18:30.164 "recv_buf_size": 4096, 00:18:30.164 "send_buf_size": 4096, 00:18:30.164 "enable_recv_pipe": true, 00:18:30.164 "enable_quickack": false, 00:18:30.164 "enable_placement_id": 0, 00:18:30.164 "enable_zerocopy_send_server": true, 00:18:30.164 "enable_zerocopy_send_client": false, 00:18:30.164 "zerocopy_threshold": 0, 00:18:30.164 "tls_version": 0, 00:18:30.164 "enable_ktls": false 00:18:30.164 } 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "method": "sock_impl_set_options", 00:18:30.164 "params": { 00:18:30.164 "impl_name": "posix", 00:18:30.164 "recv_buf_size": 2097152, 00:18:30.164 "send_buf_size": 2097152, 00:18:30.164 "enable_recv_pipe": true, 00:18:30.164 "enable_quickack": false, 00:18:30.164 "enable_placement_id": 0, 00:18:30.164 "enable_zerocopy_send_server": true, 00:18:30.164 "enable_zerocopy_send_client": false, 00:18:30.164 "zerocopy_threshold": 0, 00:18:30.164 "tls_version": 0, 00:18:30.164 "enable_ktls": false 00:18:30.164 } 00:18:30.164 } 00:18:30.164 ] 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "subsystem": "vmd", 00:18:30.164 "config": [] 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "subsystem": "accel", 00:18:30.164 "config": [ 00:18:30.164 { 00:18:30.164 
"method": "accel_set_options", 00:18:30.164 "params": { 00:18:30.164 "small_cache_size": 128, 00:18:30.164 "large_cache_size": 16, 00:18:30.164 "task_count": 2048, 00:18:30.164 "sequence_count": 2048, 00:18:30.164 "buf_count": 2048 00:18:30.164 } 00:18:30.164 } 00:18:30.164 ] 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "subsystem": "bdev", 00:18:30.164 "config": [ 00:18:30.164 { 00:18:30.164 "method": "bdev_set_options", 00:18:30.164 "params": { 00:18:30.164 "bdev_io_pool_size": 65535, 00:18:30.164 "bdev_io_cache_size": 256, 00:18:30.164 "bdev_auto_examine": true, 00:18:30.164 "iobuf_small_cache_size": 128, 00:18:30.164 "iobuf_large_cache_size": 16 00:18:30.164 } 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "method": "bdev_raid_set_options", 00:18:30.164 "params": { 00:18:30.164 "process_window_size_kb": 1024, 00:18:30.164 "process_max_bandwidth_mb_sec": 0 00:18:30.164 } 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "method": "bdev_iscsi_set_options", 00:18:30.164 "params": { 00:18:30.164 "timeout_sec": 30 00:18:30.164 } 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "method": "bdev_nvme_set_options", 00:18:30.164 "params": { 00:18:30.164 "action_on_timeout": "none", 00:18:30.164 "timeout_us": 0, 00:18:30.164 "timeout_admin_us": 0, 00:18:30.164 "keep_alive_timeout_ms": 10000, 00:18:30.164 "arbitration_burst": 0, 00:18:30.164 "low_priority_weight": 0, 00:18:30.164 "medium_priority_weight": 0, 00:18:30.164 "high_priority_weight": 0, 00:18:30.164 "nvme_adminq_poll_period_us": 10000, 00:18:30.164 "nvme_ioq_poll_period_us": 0, 00:18:30.164 "io_queue_requests": 512, 00:18:30.164 "delay_cmd_submit": true, 00:18:30.164 "transport_retry_count": 4, 00:18:30.164 "bdev_retry_count": 3, 00:18:30.164 "transport_ack_timeout": 0, 00:18:30.164 "ctrlr_loss_timeout_sec": 0, 00:18:30.164 "reconnect_delay_sec": 0, 00:18:30.164 "fast_io_fail_timeout_sec": 0, 00:18:30.164 "disable_auto_failback": false, 00:18:30.164 "generate_uuids": false, 00:18:30.164 "transport_tos": 0, 00:18:30.164 
"nvme_error_stat": false, 00:18:30.164 "rdma_srq_size": 0, 00:18:30.164 "io_path_stat": false, 00:18:30.164 "allow_accel_sequence": false, 00:18:30.164 "rdma_max_cq_size": 0, 00:18:30.164 "rdma_cm_event_timeout_ms": 0, 00:18:30.164 "dhchap_digests": [ 00:18:30.164 "sha256", 00:18:30.164 "sha384", 00:18:30.164 "sha512" 00:18:30.164 ], 00:18:30.164 "dhchap_dhgroups": [ 00:18:30.164 "null", 00:18:30.164 "ffdhe2048", 00:18:30.164 "ffdhe3072", 00:18:30.164 "ffdhe4096", 00:18:30.164 "ffdhe6144", 00:18:30.164 "ffdhe8192" 00:18:30.164 ] 00:18:30.164 } 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "method": "bdev_nvme_attach_controller", 00:18:30.164 "params": { 00:18:30.164 "name": "TLSTEST", 00:18:30.164 "trtype": "TCP", 00:18:30.164 "adrfam": "IPv4", 00:18:30.164 "traddr": "10.0.0.2", 00:18:30.164 "trsvcid": "4420", 00:18:30.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.164 "prchk_reftag": false, 00:18:30.164 "prchk_guard": false, 00:18:30.164 "ctrlr_loss_timeout_sec": 0, 00:18:30.164 "reconnect_delay_sec": 0, 00:18:30.164 "fast_io_fail_timeout_sec": 0, 00:18:30.164 "psk": "key0", 00:18:30.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.164 "hdgst": false, 00:18:30.164 "ddgst": false, 00:18:30.164 "multipath": "multipath" 00:18:30.164 } 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "method": "bdev_nvme_set_hotplug", 00:18:30.164 "params": { 00:18:30.164 "period_us": 100000, 00:18:30.164 "enable": false 00:18:30.164 } 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "method": "bdev_wait_for_examine" 00:18:30.164 } 00:18:30.164 ] 00:18:30.164 }, 00:18:30.164 { 00:18:30.164 "subsystem": "nbd", 00:18:30.164 "config": [] 00:18:30.164 } 00:18:30.164 ] 00:18:30.164 }' 00:18:30.164 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 4077981 00:18:30.164 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4077981 ']' 00:18:30.164 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 4077981 00:18:30.164 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.164 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.164 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077981 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077981' 00:18:30.424 killing process with pid 4077981 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4077981 00:18:30.424 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.424 00:18:30.424 Latency(us) 00:18:30.424 [2024-11-20T10:12:57.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.424 [2024-11-20T10:12:57.920Z] =================================================================================================================== 00:18:30.424 [2024-11-20T10:12:57.920Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4077981 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 4077708 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4077708 ']' 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4077708 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077708 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077708' 00:18:30.424 killing process with pid 4077708 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4077708 00:18:30.424 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4077708 00:18:30.684 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:30.684 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.684 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.684 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:30.684 "subsystems": [ 00:18:30.684 { 00:18:30.684 "subsystem": "keyring", 00:18:30.684 "config": [ 00:18:30.684 { 00:18:30.684 "method": "keyring_file_add_key", 00:18:30.684 "params": { 00:18:30.684 "name": "key0", 00:18:30.684 "path": "/tmp/tmp.L9jFKNf80K" 00:18:30.684 } 00:18:30.684 } 00:18:30.684 ] 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "subsystem": "iobuf", 00:18:30.684 "config": [ 00:18:30.684 { 00:18:30.684 "method": "iobuf_set_options", 00:18:30.684 "params": { 00:18:30.684 "small_pool_count": 8192, 00:18:30.684 "large_pool_count": 1024, 00:18:30.684 "small_bufsize": 8192, 00:18:30.684 "large_bufsize": 135168, 00:18:30.684 "enable_numa": false 00:18:30.684 } 00:18:30.684 } 00:18:30.684 ] 00:18:30.684 }, 
00:18:30.684 { 00:18:30.684 "subsystem": "sock", 00:18:30.684 "config": [ 00:18:30.684 { 00:18:30.684 "method": "sock_set_default_impl", 00:18:30.684 "params": { 00:18:30.684 "impl_name": "posix" 00:18:30.684 } 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "method": "sock_impl_set_options", 00:18:30.684 "params": { 00:18:30.684 "impl_name": "ssl", 00:18:30.684 "recv_buf_size": 4096, 00:18:30.684 "send_buf_size": 4096, 00:18:30.684 "enable_recv_pipe": true, 00:18:30.684 "enable_quickack": false, 00:18:30.684 "enable_placement_id": 0, 00:18:30.684 "enable_zerocopy_send_server": true, 00:18:30.684 "enable_zerocopy_send_client": false, 00:18:30.684 "zerocopy_threshold": 0, 00:18:30.684 "tls_version": 0, 00:18:30.684 "enable_ktls": false 00:18:30.684 } 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "method": "sock_impl_set_options", 00:18:30.684 "params": { 00:18:30.684 "impl_name": "posix", 00:18:30.684 "recv_buf_size": 2097152, 00:18:30.684 "send_buf_size": 2097152, 00:18:30.684 "enable_recv_pipe": true, 00:18:30.684 "enable_quickack": false, 00:18:30.684 "enable_placement_id": 0, 00:18:30.684 "enable_zerocopy_send_server": true, 00:18:30.684 "enable_zerocopy_send_client": false, 00:18:30.684 "zerocopy_threshold": 0, 00:18:30.684 "tls_version": 0, 00:18:30.684 "enable_ktls": false 00:18:30.684 } 00:18:30.684 } 00:18:30.684 ] 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "subsystem": "vmd", 00:18:30.684 "config": [] 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "subsystem": "accel", 00:18:30.684 "config": [ 00:18:30.684 { 00:18:30.684 "method": "accel_set_options", 00:18:30.684 "params": { 00:18:30.684 "small_cache_size": 128, 00:18:30.684 "large_cache_size": 16, 00:18:30.684 "task_count": 2048, 00:18:30.684 "sequence_count": 2048, 00:18:30.684 "buf_count": 2048 00:18:30.684 } 00:18:30.684 } 00:18:30.684 ] 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "subsystem": "bdev", 00:18:30.684 "config": [ 00:18:30.684 { 00:18:30.684 "method": "bdev_set_options", 00:18:30.684 "params": { 
00:18:30.684 "bdev_io_pool_size": 65535, 00:18:30.684 "bdev_io_cache_size": 256, 00:18:30.684 "bdev_auto_examine": true, 00:18:30.684 "iobuf_small_cache_size": 128, 00:18:30.684 "iobuf_large_cache_size": 16 00:18:30.684 } 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "method": "bdev_raid_set_options", 00:18:30.684 "params": { 00:18:30.684 "process_window_size_kb": 1024, 00:18:30.684 "process_max_bandwidth_mb_sec": 0 00:18:30.684 } 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "method": "bdev_iscsi_set_options", 00:18:30.684 "params": { 00:18:30.684 "timeout_sec": 30 00:18:30.684 } 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "method": "bdev_nvme_set_options", 00:18:30.684 "params": { 00:18:30.684 "action_on_timeout": "none", 00:18:30.684 "timeout_us": 0, 00:18:30.684 "timeout_admin_us": 0, 00:18:30.684 "keep_alive_timeout_ms": 10000, 00:18:30.684 "arbitration_burst": 0, 00:18:30.684 "low_priority_weight": 0, 00:18:30.684 "medium_priority_weight": 0, 00:18:30.684 "high_priority_weight": 0, 00:18:30.684 "nvme_adminq_poll_period_us": 10000, 00:18:30.684 "nvme_ioq_poll_period_us": 0, 00:18:30.684 "io_queue_requests": 0, 00:18:30.684 "delay_cmd_submit": true, 00:18:30.684 "transport_retry_count": 4, 00:18:30.684 "bdev_retry_count": 3, 00:18:30.684 "transport_ack_timeout": 0, 00:18:30.684 "ctrlr_loss_timeout_sec": 0, 00:18:30.684 "reconnect_delay_sec": 0, 00:18:30.684 "fast_io_fail_timeout_sec": 0, 00:18:30.684 "disable_auto_failback": false, 00:18:30.684 "generate_uuids": false, 00:18:30.684 "transport_tos": 0, 00:18:30.684 "nvme_error_stat": false, 00:18:30.684 "rdma_srq_size": 0, 00:18:30.684 "io_path_stat": false, 00:18:30.684 "allow_accel_sequence": false, 00:18:30.684 "rdma_max_cq_size": 0, 00:18:30.684 "rdma_cm_event_timeout_ms": 0, 00:18:30.684 "dhchap_digests": [ 00:18:30.684 "sha256", 00:18:30.684 "sha384", 00:18:30.684 "sha512" 00:18:30.684 ], 00:18:30.684 "dhchap_dhgroups": [ 00:18:30.684 "null", 00:18:30.684 "ffdhe2048", 00:18:30.684 "ffdhe3072", 00:18:30.684 
"ffdhe4096", 00:18:30.684 "ffdhe6144", 00:18:30.684 "ffdhe8192" 00:18:30.684 ] 00:18:30.684 } 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "method": "bdev_nvme_set_hotplug", 00:18:30.684 "params": { 00:18:30.684 "period_us": 100000, 00:18:30.684 "enable": false 00:18:30.684 } 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "method": "bdev_malloc_create", 00:18:30.684 "params": { 00:18:30.684 "name": "malloc0", 00:18:30.684 "num_blocks": 8192, 00:18:30.684 "block_size": 4096, 00:18:30.684 "physical_block_size": 4096, 00:18:30.684 "uuid": "c2d971f9-f21c-411d-95ca-d744cfb1b0a4", 00:18:30.684 "optimal_io_boundary": 0, 00:18:30.684 "md_size": 0, 00:18:30.684 "dif_type": 0, 00:18:30.684 "dif_is_head_of_md": false, 00:18:30.684 "dif_pi_format": 0 00:18:30.684 } 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "method": "bdev_wait_for_examine" 00:18:30.684 } 00:18:30.684 ] 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "subsystem": "nbd", 00:18:30.684 "config": [] 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "subsystem": "scheduler", 00:18:30.684 "config": [ 00:18:30.684 { 00:18:30.684 "method": "framework_set_scheduler", 00:18:30.684 "params": { 00:18:30.684 "name": "static" 00:18:30.684 } 00:18:30.684 } 00:18:30.684 ] 00:18:30.684 }, 00:18:30.684 { 00:18:30.684 "subsystem": "nvmf", 00:18:30.684 "config": [ 00:18:30.684 { 00:18:30.684 "method": "nvmf_set_config", 00:18:30.684 "params": { 00:18:30.684 "discovery_filter": "match_any", 00:18:30.684 "admin_cmd_passthru": { 00:18:30.684 "identify_ctrlr": false 00:18:30.684 }, 00:18:30.684 "dhchap_digests": [ 00:18:30.684 "sha256", 00:18:30.684 "sha384", 00:18:30.684 "sha512" 00:18:30.684 ], 00:18:30.684 "dhchap_dhgroups": [ 00:18:30.684 "null", 00:18:30.684 "ffdhe2048", 00:18:30.684 "ffdhe3072", 00:18:30.685 "ffdhe4096", 00:18:30.685 "ffdhe6144", 00:18:30.685 "ffdhe8192" 00:18:30.685 ] 00:18:30.685 } 00:18:30.685 }, 00:18:30.685 { 00:18:30.685 "method": "nvmf_set_max_subsystems", 00:18:30.685 "params": { 00:18:30.685 "max_subsystems": 1024 
00:18:30.685 } 00:18:30.685 }, 00:18:30.685 { 00:18:30.685 "method": "nvmf_set_crdt", 00:18:30.685 "params": { 00:18:30.685 "crdt1": 0, 00:18:30.685 "crdt2": 0, 00:18:30.685 "crdt3": 0 00:18:30.685 } 00:18:30.685 }, 00:18:30.685 { 00:18:30.685 "method": "nvmf_create_transport", 00:18:30.685 "params": { 00:18:30.685 "trtype": "TCP", 00:18:30.685 "max_queue_depth": 128, 00:18:30.685 "max_io_qpairs_per_ctrlr": 127, 00:18:30.685 "in_capsule_data_size": 4096, 00:18:30.685 "max_io_size": 131072, 00:18:30.685 "io_unit_size": 131072, 00:18:30.685 "max_aq_depth": 128, 00:18:30.685 "num_shared_buffers": 511, 00:18:30.685 "buf_cache_size": 4294967295, 00:18:30.685 "dif_insert_or_strip": false, 00:18:30.685 "zcopy": false, 00:18:30.685 "c2h_success": false, 00:18:30.685 "sock_priority": 0, 00:18:30.685 "abort_timeout_sec": 1, 00:18:30.685 "ack_timeout": 0, 00:18:30.685 "data_wr_pool_size": 0 00:18:30.685 } 00:18:30.685 }, 00:18:30.685 { 00:18:30.685 "method": "nvmf_create_subsystem", 00:18:30.685 "params": { 00:18:30.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.685 "allow_any_host": false, 00:18:30.685 "serial_number": "SPDK00000000000001", 00:18:30.685 "model_number": "SPDK bdev Controller", 00:18:30.685 "max_namespaces": 10, 00:18:30.685 "min_cntlid": 1, 00:18:30.685 "max_cntlid": 65519, 00:18:30.685 "ana_reporting": false 00:18:30.685 } 00:18:30.685 }, 00:18:30.685 { 00:18:30.685 "method": "nvmf_subsystem_add_host", 00:18:30.685 "params": { 00:18:30.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.685 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.685 "psk": "key0" 00:18:30.685 } 00:18:30.685 }, 00:18:30.685 { 00:18:30.685 "method": "nvmf_subsystem_add_ns", 00:18:30.685 "params": { 00:18:30.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.685 "namespace": { 00:18:30.685 "nsid": 1, 00:18:30.685 "bdev_name": "malloc0", 00:18:30.685 "nguid": "C2D971F9F21C411D95CAD744CFB1B0A4", 00:18:30.685 "uuid": "c2d971f9-f21c-411d-95ca-d744cfb1b0a4", 00:18:30.685 "no_auto_visible": 
false 00:18:30.685 } 00:18:30.685 } 00:18:30.685 }, 00:18:30.685 { 00:18:30.685 "method": "nvmf_subsystem_add_listener", 00:18:30.685 "params": { 00:18:30.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.685 "listen_address": { 00:18:30.685 "trtype": "TCP", 00:18:30.685 "adrfam": "IPv4", 00:18:30.685 "traddr": "10.0.0.2", 00:18:30.685 "trsvcid": "4420" 00:18:30.685 }, 00:18:30.685 "secure_channel": true 00:18:30.685 } 00:18:30.685 } 00:18:30.685 ] 00:18:30.685 } 00:18:30.685 ] 00:18:30.685 }' 00:18:30.685 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.685 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4078386 00:18:30.685 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:30.685 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4078386 00:18:30.685 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4078386 ']' 00:18:30.685 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.685 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.685 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:30.685 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.685 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.685 [2024-11-20 11:12:58.122190] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:18:30.685 [2024-11-20 11:12:58.122238] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.944 [2024-11-20 11:12:58.196019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.944 [2024-11-20 11:12:58.236853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.944 [2024-11-20 11:12:58.236888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.944 [2024-11-20 11:12:58.236896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.944 [2024-11-20 11:12:58.236902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.944 [2024-11-20 11:12:58.236907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:30.944 [2024-11-20 11:12:58.237464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.203 [2024-11-20 11:12:58.450405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.203 [2024-11-20 11:12:58.482431] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.203 [2024-11-20 11:12:58.482635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=4078464 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 4078464 /var/tmp/bdevperf.sock 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4078464 ']' 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.773 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:31.773 "subsystems": [ 00:18:31.773 { 00:18:31.773 "subsystem": "keyring", 00:18:31.773 "config": [ 00:18:31.773 { 00:18:31.773 "method": "keyring_file_add_key", 00:18:31.773 "params": { 00:18:31.773 "name": "key0", 00:18:31.773 "path": "/tmp/tmp.L9jFKNf80K" 00:18:31.773 } 00:18:31.773 } 00:18:31.773 ] 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "subsystem": "iobuf", 00:18:31.773 "config": [ 00:18:31.773 { 00:18:31.773 "method": "iobuf_set_options", 00:18:31.773 "params": { 00:18:31.773 "small_pool_count": 8192, 00:18:31.773 "large_pool_count": 1024, 00:18:31.773 "small_bufsize": 8192, 00:18:31.773 "large_bufsize": 135168, 00:18:31.773 "enable_numa": false 00:18:31.773 } 00:18:31.773 } 00:18:31.773 ] 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "subsystem": "sock", 00:18:31.773 "config": [ 00:18:31.773 { 00:18:31.773 "method": "sock_set_default_impl", 00:18:31.773 "params": { 00:18:31.773 "impl_name": "posix" 00:18:31.773 } 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "method": "sock_impl_set_options", 00:18:31.773 "params": { 00:18:31.773 "impl_name": "ssl", 00:18:31.773 "recv_buf_size": 4096, 00:18:31.773 "send_buf_size": 4096, 00:18:31.773 "enable_recv_pipe": true, 00:18:31.773 "enable_quickack": false, 00:18:31.773 "enable_placement_id": 0, 00:18:31.773 "enable_zerocopy_send_server": true, 00:18:31.773 "enable_zerocopy_send_client": false, 00:18:31.773 "zerocopy_threshold": 0, 00:18:31.773 "tls_version": 0, 00:18:31.773 "enable_ktls": false 00:18:31.773 } 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "method": "sock_impl_set_options", 00:18:31.773 "params": { 
00:18:31.773 "impl_name": "posix", 00:18:31.773 "recv_buf_size": 2097152, 00:18:31.773 "send_buf_size": 2097152, 00:18:31.773 "enable_recv_pipe": true, 00:18:31.773 "enable_quickack": false, 00:18:31.773 "enable_placement_id": 0, 00:18:31.773 "enable_zerocopy_send_server": true, 00:18:31.773 "enable_zerocopy_send_client": false, 00:18:31.773 "zerocopy_threshold": 0, 00:18:31.773 "tls_version": 0, 00:18:31.773 "enable_ktls": false 00:18:31.773 } 00:18:31.773 } 00:18:31.773 ] 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "subsystem": "vmd", 00:18:31.773 "config": [] 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "subsystem": "accel", 00:18:31.773 "config": [ 00:18:31.773 { 00:18:31.773 "method": "accel_set_options", 00:18:31.773 "params": { 00:18:31.773 "small_cache_size": 128, 00:18:31.773 "large_cache_size": 16, 00:18:31.773 "task_count": 2048, 00:18:31.773 "sequence_count": 2048, 00:18:31.773 "buf_count": 2048 00:18:31.773 } 00:18:31.773 } 00:18:31.773 ] 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "subsystem": "bdev", 00:18:31.773 "config": [ 00:18:31.773 { 00:18:31.773 "method": "bdev_set_options", 00:18:31.773 "params": { 00:18:31.773 "bdev_io_pool_size": 65535, 00:18:31.773 "bdev_io_cache_size": 256, 00:18:31.773 "bdev_auto_examine": true, 00:18:31.773 "iobuf_small_cache_size": 128, 00:18:31.773 "iobuf_large_cache_size": 16 00:18:31.773 } 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "method": "bdev_raid_set_options", 00:18:31.773 "params": { 00:18:31.773 "process_window_size_kb": 1024, 00:18:31.773 "process_max_bandwidth_mb_sec": 0 00:18:31.773 } 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "method": "bdev_iscsi_set_options", 00:18:31.773 "params": { 00:18:31.773 "timeout_sec": 30 00:18:31.773 } 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "method": "bdev_nvme_set_options", 00:18:31.773 "params": { 00:18:31.773 "action_on_timeout": "none", 00:18:31.773 "timeout_us": 0, 00:18:31.773 "timeout_admin_us": 0, 00:18:31.773 "keep_alive_timeout_ms": 10000, 00:18:31.773 
"arbitration_burst": 0, 00:18:31.773 "low_priority_weight": 0, 00:18:31.773 "medium_priority_weight": 0, 00:18:31.773 "high_priority_weight": 0, 00:18:31.773 "nvme_adminq_poll_period_us": 10000, 00:18:31.773 "nvme_ioq_poll_period_us": 0, 00:18:31.773 "io_queue_requests": 512, 00:18:31.773 "delay_cmd_submit": true, 00:18:31.773 "transport_retry_count": 4, 00:18:31.773 "bdev_retry_count": 3, 00:18:31.773 "transport_ack_timeout": 0, 00:18:31.773 "ctrlr_loss_timeout_sec": 0, 00:18:31.773 "reconnect_delay_sec": 0, 00:18:31.773 "fast_io_fail_timeout_sec": 0, 00:18:31.773 "disable_auto_failback": false, 00:18:31.773 "generate_uuids": false, 00:18:31.773 "transport_tos": 0, 00:18:31.773 "nvme_error_stat": false, 00:18:31.773 "rdma_srq_size": 0, 00:18:31.773 "io_path_stat": false, 00:18:31.773 "allow_accel_sequence": false, 00:18:31.773 "rdma_max_cq_size": 0, 00:18:31.773 "rdma_cm_event_timeout_ms": 0, 00:18:31.773 "dhchap_digests": [ 00:18:31.773 "sha256", 00:18:31.773 "sha384", 00:18:31.773 "sha512" 00:18:31.773 ], 00:18:31.773 "dhchap_dhgroups": [ 00:18:31.773 "null", 00:18:31.773 "ffdhe2048", 00:18:31.773 "ffdhe3072", 00:18:31.773 "ffdhe4096", 00:18:31.773 "ffdhe6144", 00:18:31.773 "ffdhe8192" 00:18:31.773 ] 00:18:31.773 } 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "method": "bdev_nvme_attach_controller", 00:18:31.773 "params": { 00:18:31.773 "name": "TLSTEST", 00:18:31.773 "trtype": "TCP", 00:18:31.773 "adrfam": "IPv4", 00:18:31.773 "traddr": "10.0.0.2", 00:18:31.773 "trsvcid": "4420", 00:18:31.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.773 "prchk_reftag": false, 00:18:31.773 "prchk_guard": false, 00:18:31.773 "ctrlr_loss_timeout_sec": 0, 00:18:31.773 "reconnect_delay_sec": 0, 00:18:31.773 "fast_io_fail_timeout_sec": 0, 00:18:31.773 "psk": "key0", 00:18:31.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.773 "hdgst": false, 00:18:31.773 "ddgst": false, 00:18:31.773 "multipath": "multipath" 00:18:31.773 } 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 
"method": "bdev_nvme_set_hotplug", 00:18:31.773 "params": { 00:18:31.773 "period_us": 100000, 00:18:31.773 "enable": false 00:18:31.773 } 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "method": "bdev_wait_for_examine" 00:18:31.773 } 00:18:31.773 ] 00:18:31.773 }, 00:18:31.773 { 00:18:31.773 "subsystem": "nbd", 00:18:31.773 "config": [] 00:18:31.773 } 00:18:31.774 ] 00:18:31.774 }' 00:18:31.774 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.774 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.774 [2024-11-20 11:12:59.039007] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:18:31.774 [2024-11-20 11:12:59.039053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078464 ] 00:18:31.774 [2024-11-20 11:12:59.111600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.774 [2024-11-20 11:12:59.153920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.033 [2024-11-20 11:12:59.305849] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.601 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.601 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.601 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:32.601 Running I/O for 10 seconds... 
00:18:34.916 4868.00 IOPS, 19.02 MiB/s [2024-11-20T10:13:03.350Z] 4846.00 IOPS, 18.93 MiB/s [2024-11-20T10:13:04.291Z] 4767.33 IOPS, 18.62 MiB/s [2024-11-20T10:13:05.231Z] 4766.00 IOPS, 18.62 MiB/s [2024-11-20T10:13:06.168Z] 4724.20 IOPS, 18.45 MiB/s [2024-11-20T10:13:07.105Z] 4724.17 IOPS, 18.45 MiB/s [2024-11-20T10:13:08.042Z] 4746.43 IOPS, 18.54 MiB/s [2024-11-20T10:13:09.424Z] 4762.00 IOPS, 18.60 MiB/s [2024-11-20T10:13:10.048Z] 4778.89 IOPS, 18.67 MiB/s [2024-11-20T10:13:10.048Z] 4769.00 IOPS, 18.63 MiB/s 00:18:42.552 Latency(us) 00:18:42.552 [2024-11-20T10:13:10.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.552 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:42.552 Verification LBA range: start 0x0 length 0x2000 00:18:42.552 TLSTESTn1 : 10.03 4768.87 18.63 0.00 0.00 26793.95 4843.97 48781.58 00:18:42.552 [2024-11-20T10:13:10.048Z] =================================================================================================================== 00:18:42.552 [2024-11-20T10:13:10.048Z] Total : 4768.87 18.63 0.00 0.00 26793.95 4843.97 48781.58 00:18:42.552 { 00:18:42.552 "results": [ 00:18:42.552 { 00:18:42.552 "job": "TLSTESTn1", 00:18:42.552 "core_mask": "0x4", 00:18:42.552 "workload": "verify", 00:18:42.552 "status": "finished", 00:18:42.552 "verify_range": { 00:18:42.552 "start": 0, 00:18:42.552 "length": 8192 00:18:42.552 }, 00:18:42.552 "queue_depth": 128, 00:18:42.552 "io_size": 4096, 00:18:42.552 "runtime": 10.027116, 00:18:42.552 "iops": 4768.868735536718, 00:18:42.552 "mibps": 18.628393498190306, 00:18:42.552 "io_failed": 0, 00:18:42.552 "io_timeout": 0, 00:18:42.552 "avg_latency_us": 26793.95479802949, 00:18:42.552 "min_latency_us": 4843.965217391305, 00:18:42.552 "max_latency_us": 48781.57913043478 00:18:42.552 } 00:18:42.552 ], 00:18:42.552 "core_count": 1 00:18:42.552 } 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 4078464 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4078464 ']' 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4078464 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078464 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078464' 00:18:42.812 killing process with pid 4078464 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4078464 00:18:42.812 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.812 00:18:42.812 Latency(us) 00:18:42.812 [2024-11-20T10:13:10.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.812 [2024-11-20T10:13:10.308Z] =================================================================================================================== 00:18:42.812 [2024-11-20T10:13:10.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4078464 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 4078386 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 4078386 ']' 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4078386 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.812 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078386 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078386' 00:18:43.071 killing process with pid 4078386 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4078386 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4078386 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4080311 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4080311 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:43.071 
11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4080311 ']' 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.071 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.071 [2024-11-20 11:13:10.532980] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:18:43.071 [2024-11-20 11:13:10.533028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.330 [2024-11-20 11:13:10.612691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.330 [2024-11-20 11:13:10.652576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.330 [2024-11-20 11:13:10.652610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.330 [2024-11-20 11:13:10.652618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.330 [2024-11-20 11:13:10.652624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:43.330 [2024-11-20 11:13:10.652631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.330 [2024-11-20 11:13:10.653210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.330 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.330 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.330 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.330 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.330 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.330 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.330 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.L9jFKNf80K 00:18:43.330 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.L9jFKNf80K 00:18:43.330 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:43.589 [2024-11-20 11:13:10.970211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.589 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:43.847 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:43.847 [2024-11-20 11:13:11.335164] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:43.847 [2024-11-20 11:13:11.335390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.106 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:44.106 malloc0 00:18:44.106 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:44.365 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.L9jFKNf80K 00:18:44.624 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:44.624 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=4080643 00:18:44.624 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:44.624 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.624 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 4080643 /var/tmp/bdevperf.sock 00:18:44.624 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4080643 ']' 00:18:44.624 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.624 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.624 
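The target setup above is driven through rpc.py: create the TCP transport, add a TLS-secured listener (the `-k` flag), register the PSK file with `keyring_file_add_key`, and bind it to the host with `--psk key0`. As a sketch, the same sequence expressed as raw JSON-RPC payloads of the kind rpc.py sends over the UNIX socket — the parameter names here are assumptions about how rpc.py maps its CLI flags, not values taken from this log:

```python
import json
from itertools import count

_ids = count(1)

def rpc_request(method, **params):
    # JSON-RPC 2.0 envelope, as rpc.py sends it over /var/tmp/spdk.sock.
    req = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params:
        req["params"] = params
    return req

# TLS target setup from the log, as raw payloads (parameter names are assumed).
setup = [
    rpc_request("nvmf_create_transport", trtype="TCP"),
    rpc_request("nvmf_create_subsystem",
                nqn="nqn.2016-06.io.spdk:cnode1",
                serial_number="SPDK00000000000001", max_namespaces=10),
    rpc_request("nvmf_subsystem_add_listener",
                nqn="nqn.2016-06.io.spdk:cnode1",
                secure_channel=True,  # the '-k' flag: TLS on this listener
                listen_address={"trtype": "TCP", "adrfam": "IPv4",
                                "traddr": "10.0.0.2", "trsvcid": "4420"}),
    rpc_request("bdev_malloc_create", name="malloc0",
                num_blocks=8192, block_size=4096),  # 32 MiB of 4 KiB blocks
    rpc_request("nvmf_subsystem_add_ns",
                nqn="nqn.2016-06.io.spdk:cnode1",
                namespace={"nsid": 1, "bdev_name": "malloc0"}),
    rpc_request("keyring_file_add_key",
                name="key0", path="/tmp/tmp.L9jFKNf80K"),
    rpc_request("nvmf_subsystem_add_host",
                nqn="nqn.2016-06.io.spdk:cnode1",
                host="nqn.2016-06.io.spdk:host1", psk="key0"),
]
wire = "\n".join(json.dumps(r) for r in setup)  # what would cross the socket
```

The initiator side mirrors the last two calls: `keyring_file_add_key` against the bdevperf RPC socket, then `bdev_nvme_attach_controller --psk key0`, which is why the log prints the "TLS support is considered experimental" notice on both ends.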
11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.624 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.624 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.882 [2024-11-20 11:13:12.156212] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:18:44.882 [2024-11-20 11:13:12.156262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080643 ] 00:18:44.882 [2024-11-20 11:13:12.232099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.882 [2024-11-20 11:13:12.274761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.882 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.882 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.882 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9jFKNf80K 00:18:45.141 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:45.400 [2024-11-20 11:13:12.734255] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:45.400 nvme0n1 00:18:45.400 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.659 Running I/O for 1 seconds... 00:18:46.595 5334.00 IOPS, 20.84 MiB/s 00:18:46.595 Latency(us) 00:18:46.595 [2024-11-20T10:13:14.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.595 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:46.595 Verification LBA range: start 0x0 length 0x2000 00:18:46.595 nvme0n1 : 1.02 5365.20 20.96 0.00 0.00 23677.03 6895.53 23478.98 00:18:46.595 [2024-11-20T10:13:14.091Z] =================================================================================================================== 00:18:46.595 [2024-11-20T10:13:14.091Z] Total : 5365.20 20.96 0.00 0.00 23677.03 6895.53 23478.98 00:18:46.595 { 00:18:46.595 "results": [ 00:18:46.595 { 00:18:46.595 "job": "nvme0n1", 00:18:46.595 "core_mask": "0x2", 00:18:46.595 "workload": "verify", 00:18:46.595 "status": "finished", 00:18:46.595 "verify_range": { 00:18:46.595 "start": 0, 00:18:46.595 "length": 8192 00:18:46.595 }, 00:18:46.595 "queue_depth": 128, 00:18:46.595 "io_size": 4096, 00:18:46.595 "runtime": 1.018228, 00:18:46.595 "iops": 5365.203078288949, 00:18:46.595 "mibps": 20.957824524566206, 00:18:46.595 "io_failed": 0, 00:18:46.595 "io_timeout": 0, 00:18:46.595 "avg_latency_us": 23677.028257447335, 00:18:46.595 "min_latency_us": 6895.5269565217395, 00:18:46.595 "max_latency_us": 23478.98434782609 00:18:46.595 } 00:18:46.595 ], 00:18:46.595 "core_count": 1 00:18:46.595 } 00:18:46.595 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 4080643 00:18:46.595 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4080643 ']' 00:18:46.595 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 4080643 00:18:46.595 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.595 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.595 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4080643 00:18:46.595 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:46.595 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:46.595 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4080643' 00:18:46.595 killing process with pid 4080643 00:18:46.595 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4080643 00:18:46.595 Received shutdown signal, test time was about 1.000000 seconds 00:18:46.595 00:18:46.595 Latency(us) 00:18:46.595 [2024-11-20T10:13:14.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.595 [2024-11-20T10:13:14.091Z] =================================================================================================================== 00:18:46.595 [2024-11-20T10:13:14.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.595 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4080643 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 4080311 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4080311 ']' 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4080311 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4080311 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4080311' 00:18:46.853 killing process with pid 4080311 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4080311 00:18:46.853 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4080311 00:18:47.112 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4081029 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4081029 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4081029 ']' 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.113 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.113 [2024-11-20 11:13:14.448229] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:18:47.113 [2024-11-20 11:13:14.448274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.113 [2024-11-20 11:13:14.527099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.113 [2024-11-20 11:13:14.563182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.113 [2024-11-20 11:13:14.563218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.113 [2024-11-20 11:13:14.563225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.113 [2024-11-20 11:13:14.563232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.113 [2024-11-20 11:13:14.563237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
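The `mibps` figures in the bdevperf result JSON blocks above are simply `iops` scaled by the fixed 4096-byte I/O size (`-o 4k`): MiB/s = IOPS × 4096 / 2^20, i.e. IOPS / 256. A quick check against numbers copied from this log:

```python
def mib_per_sec(iops, io_size=4096):
    # bdevperf reports mibps = iops * io_size / 2^20; with 4 KiB I/O that is iops / 256.
    return iops * io_size / (1024 * 1024)

# (iops, reported MiB/s) pairs taken from the result JSON blocks in this log.
samples = [
    (4768.868735536718, 18.63),  # 10 s TLSTESTn1 run
    (5365.203078288949, 20.96),  # 1 s nvme0n1 run
    (5152.575794922356, 20.13),  # later 1 s nvme0n1 run
]
for iops, reported in samples:
    assert round(mib_per_sec(iops), 2) == reported
```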
00:18:47.113 [2024-11-20 11:13:14.563754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.372 [2024-11-20 11:13:14.711274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.372 malloc0 00:18:47.372 [2024-11-20 11:13:14.739501] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.372 [2024-11-20 11:13:14.739721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=4081056 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 4081056 /var/tmp/bdevperf.sock 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4081056 ']' 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.372 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.372 [2024-11-20 11:13:14.817532] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
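Each SPDK app in this log prints a `[ DPDK EAL parameters: ... ]` line at startup; the `--file-prefix=spdk_pidNNNN` entry carries the process pid that the later `killprocess` calls target. A small illustrative helper (not part of the test suite) that pulls these out of such a line:

```python
import re

# A startup line of the same shape as the ones in this log (abbreviated).
LINE = ("[2024-11-20 11:13:14.817576] [ DPDK EAL parameters: bdevperf "
        "--no-shconf -c 2 --huge-unlink --no-telemetry "
        "--file-prefix=spdk_pid4081056 ]")

def eal_params(line):
    # Extract the bracketed EAL argument list from an SPDK startup log line.
    m = re.search(r"\[ DPDK EAL parameters: (.*?) \]", line)
    return m.group(1).split() if m else []

args = eal_params(LINE)
# First token is the app name; the file-prefix suffix is the pid.
prefix = next(a for a in args if a.startswith("--file-prefix="))
pid = int(prefix.removeprefix("--file-prefix=spdk_pid"))
```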
00:18:47.372 [2024-11-20 11:13:14.817576] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081056 ] 00:18:47.631 [2024-11-20 11:13:14.892801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.631 [2024-11-20 11:13:14.935840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.631 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.631 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.631 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L9jFKNf80K 00:18:47.890 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:47.890 [2024-11-20 11:13:15.376642] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.149 nvme0n1 00:18:48.149 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.149 Running I/O for 1 seconds... 
00:18:49.086 5097.00 IOPS, 19.91 MiB/s 00:18:49.086 Latency(us) 00:18:49.086 [2024-11-20T10:13:16.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.086 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:49.086 Verification LBA range: start 0x0 length 0x2000 00:18:49.086 nvme0n1 : 1.01 5152.58 20.13 0.00 0.00 24667.76 4758.48 57671.68 00:18:49.086 [2024-11-20T10:13:16.582Z] =================================================================================================================== 00:18:49.086 [2024-11-20T10:13:16.582Z] Total : 5152.58 20.13 0.00 0.00 24667.76 4758.48 57671.68 00:18:49.086 { 00:18:49.086 "results": [ 00:18:49.086 { 00:18:49.086 "job": "nvme0n1", 00:18:49.086 "core_mask": "0x2", 00:18:49.086 "workload": "verify", 00:18:49.086 "status": "finished", 00:18:49.086 "verify_range": { 00:18:49.086 "start": 0, 00:18:49.086 "length": 8192 00:18:49.086 }, 00:18:49.086 "queue_depth": 128, 00:18:49.086 "io_size": 4096, 00:18:49.086 "runtime": 1.01425, 00:18:49.086 "iops": 5152.575794922356, 00:18:49.086 "mibps": 20.127249198915454, 00:18:49.086 "io_failed": 0, 00:18:49.086 "io_timeout": 0, 00:18:49.086 "avg_latency_us": 24667.76283132831, 00:18:49.086 "min_latency_us": 4758.48347826087, 00:18:49.086 "max_latency_us": 57671.68 00:18:49.086 } 00:18:49.086 ], 00:18:49.086 "core_count": 1 00:18:49.086 } 00:18:49.345 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:49.345 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.345 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.345 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.345 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:49.345 "subsystems": [ 00:18:49.345 { 00:18:49.345 "subsystem": "keyring", 
00:18:49.345 "config": [ 00:18:49.345 { 00:18:49.345 "method": "keyring_file_add_key", 00:18:49.345 "params": { 00:18:49.345 "name": "key0", 00:18:49.345 "path": "/tmp/tmp.L9jFKNf80K" 00:18:49.345 } 00:18:49.345 } 00:18:49.345 ] 00:18:49.345 }, 00:18:49.345 { 00:18:49.345 "subsystem": "iobuf", 00:18:49.345 "config": [ 00:18:49.345 { 00:18:49.345 "method": "iobuf_set_options", 00:18:49.345 "params": { 00:18:49.345 "small_pool_count": 8192, 00:18:49.345 "large_pool_count": 1024, 00:18:49.345 "small_bufsize": 8192, 00:18:49.345 "large_bufsize": 135168, 00:18:49.345 "enable_numa": false 00:18:49.345 } 00:18:49.345 } 00:18:49.345 ] 00:18:49.345 }, 00:18:49.345 { 00:18:49.345 "subsystem": "sock", 00:18:49.345 "config": [ 00:18:49.345 { 00:18:49.345 "method": "sock_set_default_impl", 00:18:49.345 "params": { 00:18:49.345 "impl_name": "posix" 00:18:49.345 } 00:18:49.345 }, 00:18:49.345 { 00:18:49.345 "method": "sock_impl_set_options", 00:18:49.345 "params": { 00:18:49.345 "impl_name": "ssl", 00:18:49.345 "recv_buf_size": 4096, 00:18:49.345 "send_buf_size": 4096, 00:18:49.345 "enable_recv_pipe": true, 00:18:49.345 "enable_quickack": false, 00:18:49.345 "enable_placement_id": 0, 00:18:49.345 "enable_zerocopy_send_server": true, 00:18:49.345 "enable_zerocopy_send_client": false, 00:18:49.345 "zerocopy_threshold": 0, 00:18:49.345 "tls_version": 0, 00:18:49.345 "enable_ktls": false 00:18:49.345 } 00:18:49.345 }, 00:18:49.345 { 00:18:49.345 "method": "sock_impl_set_options", 00:18:49.345 "params": { 00:18:49.345 "impl_name": "posix", 00:18:49.345 "recv_buf_size": 2097152, 00:18:49.345 "send_buf_size": 2097152, 00:18:49.345 "enable_recv_pipe": true, 00:18:49.345 "enable_quickack": false, 00:18:49.345 "enable_placement_id": 0, 00:18:49.345 "enable_zerocopy_send_server": true, 00:18:49.345 "enable_zerocopy_send_client": false, 00:18:49.345 "zerocopy_threshold": 0, 00:18:49.345 "tls_version": 0, 00:18:49.345 "enable_ktls": false 00:18:49.345 } 00:18:49.345 } 00:18:49.345 ] 
00:18:49.345 }, 00:18:49.345 { 00:18:49.345 "subsystem": "vmd", 00:18:49.345 "config": [] 00:18:49.345 }, 00:18:49.345 { 00:18:49.345 "subsystem": "accel", 00:18:49.345 "config": [ 00:18:49.345 { 00:18:49.345 "method": "accel_set_options", 00:18:49.345 "params": { 00:18:49.345 "small_cache_size": 128, 00:18:49.345 "large_cache_size": 16, 00:18:49.345 "task_count": 2048, 00:18:49.345 "sequence_count": 2048, 00:18:49.345 "buf_count": 2048 00:18:49.345 } 00:18:49.345 } 00:18:49.345 ] 00:18:49.345 }, 00:18:49.345 { 00:18:49.345 "subsystem": "bdev", 00:18:49.345 "config": [ 00:18:49.345 { 00:18:49.345 "method": "bdev_set_options", 00:18:49.345 "params": { 00:18:49.345 "bdev_io_pool_size": 65535, 00:18:49.345 "bdev_io_cache_size": 256, 00:18:49.345 "bdev_auto_examine": true, 00:18:49.345 "iobuf_small_cache_size": 128, 00:18:49.346 "iobuf_large_cache_size": 16 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "bdev_raid_set_options", 00:18:49.346 "params": { 00:18:49.346 "process_window_size_kb": 1024, 00:18:49.346 "process_max_bandwidth_mb_sec": 0 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "bdev_iscsi_set_options", 00:18:49.346 "params": { 00:18:49.346 "timeout_sec": 30 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "bdev_nvme_set_options", 00:18:49.346 "params": { 00:18:49.346 "action_on_timeout": "none", 00:18:49.346 "timeout_us": 0, 00:18:49.346 "timeout_admin_us": 0, 00:18:49.346 "keep_alive_timeout_ms": 10000, 00:18:49.346 "arbitration_burst": 0, 00:18:49.346 "low_priority_weight": 0, 00:18:49.346 "medium_priority_weight": 0, 00:18:49.346 "high_priority_weight": 0, 00:18:49.346 "nvme_adminq_poll_period_us": 10000, 00:18:49.346 "nvme_ioq_poll_period_us": 0, 00:18:49.346 "io_queue_requests": 0, 00:18:49.346 "delay_cmd_submit": true, 00:18:49.346 "transport_retry_count": 4, 00:18:49.346 "bdev_retry_count": 3, 00:18:49.346 "transport_ack_timeout": 0, 00:18:49.346 "ctrlr_loss_timeout_sec": 0, 00:18:49.346 
"reconnect_delay_sec": 0, 00:18:49.346 "fast_io_fail_timeout_sec": 0, 00:18:49.346 "disable_auto_failback": false, 00:18:49.346 "generate_uuids": false, 00:18:49.346 "transport_tos": 0, 00:18:49.346 "nvme_error_stat": false, 00:18:49.346 "rdma_srq_size": 0, 00:18:49.346 "io_path_stat": false, 00:18:49.346 "allow_accel_sequence": false, 00:18:49.346 "rdma_max_cq_size": 0, 00:18:49.346 "rdma_cm_event_timeout_ms": 0, 00:18:49.346 "dhchap_digests": [ 00:18:49.346 "sha256", 00:18:49.346 "sha384", 00:18:49.346 "sha512" 00:18:49.346 ], 00:18:49.346 "dhchap_dhgroups": [ 00:18:49.346 "null", 00:18:49.346 "ffdhe2048", 00:18:49.346 "ffdhe3072", 00:18:49.346 "ffdhe4096", 00:18:49.346 "ffdhe6144", 00:18:49.346 "ffdhe8192" 00:18:49.346 ] 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "bdev_nvme_set_hotplug", 00:18:49.346 "params": { 00:18:49.346 "period_us": 100000, 00:18:49.346 "enable": false 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "bdev_malloc_create", 00:18:49.346 "params": { 00:18:49.346 "name": "malloc0", 00:18:49.346 "num_blocks": 8192, 00:18:49.346 "block_size": 4096, 00:18:49.346 "physical_block_size": 4096, 00:18:49.346 "uuid": "847770f9-8bd4-4183-aa7c-9e461b5e3a69", 00:18:49.346 "optimal_io_boundary": 0, 00:18:49.346 "md_size": 0, 00:18:49.346 "dif_type": 0, 00:18:49.346 "dif_is_head_of_md": false, 00:18:49.346 "dif_pi_format": 0 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "bdev_wait_for_examine" 00:18:49.346 } 00:18:49.346 ] 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "subsystem": "nbd", 00:18:49.346 "config": [] 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "subsystem": "scheduler", 00:18:49.346 "config": [ 00:18:49.346 { 00:18:49.346 "method": "framework_set_scheduler", 00:18:49.346 "params": { 00:18:49.346 "name": "static" 00:18:49.346 } 00:18:49.346 } 00:18:49.346 ] 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "subsystem": "nvmf", 00:18:49.346 "config": [ 00:18:49.346 { 00:18:49.346 
"method": "nvmf_set_config", 00:18:49.346 "params": { 00:18:49.346 "discovery_filter": "match_any", 00:18:49.346 "admin_cmd_passthru": { 00:18:49.346 "identify_ctrlr": false 00:18:49.346 }, 00:18:49.346 "dhchap_digests": [ 00:18:49.346 "sha256", 00:18:49.346 "sha384", 00:18:49.346 "sha512" 00:18:49.346 ], 00:18:49.346 "dhchap_dhgroups": [ 00:18:49.346 "null", 00:18:49.346 "ffdhe2048", 00:18:49.346 "ffdhe3072", 00:18:49.346 "ffdhe4096", 00:18:49.346 "ffdhe6144", 00:18:49.346 "ffdhe8192" 00:18:49.346 ] 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "nvmf_set_max_subsystems", 00:18:49.346 "params": { 00:18:49.346 "max_subsystems": 1024 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "nvmf_set_crdt", 00:18:49.346 "params": { 00:18:49.346 "crdt1": 0, 00:18:49.346 "crdt2": 0, 00:18:49.346 "crdt3": 0 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "nvmf_create_transport", 00:18:49.346 "params": { 00:18:49.346 "trtype": "TCP", 00:18:49.346 "max_queue_depth": 128, 00:18:49.346 "max_io_qpairs_per_ctrlr": 127, 00:18:49.346 "in_capsule_data_size": 4096, 00:18:49.346 "max_io_size": 131072, 00:18:49.346 "io_unit_size": 131072, 00:18:49.346 "max_aq_depth": 128, 00:18:49.346 "num_shared_buffers": 511, 00:18:49.346 "buf_cache_size": 4294967295, 00:18:49.346 "dif_insert_or_strip": false, 00:18:49.346 "zcopy": false, 00:18:49.346 "c2h_success": false, 00:18:49.346 "sock_priority": 0, 00:18:49.346 "abort_timeout_sec": 1, 00:18:49.346 "ack_timeout": 0, 00:18:49.346 "data_wr_pool_size": 0 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "nvmf_create_subsystem", 00:18:49.346 "params": { 00:18:49.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.346 "allow_any_host": false, 00:18:49.346 "serial_number": "00000000000000000000", 00:18:49.346 "model_number": "SPDK bdev Controller", 00:18:49.346 "max_namespaces": 32, 00:18:49.346 "min_cntlid": 1, 00:18:49.346 "max_cntlid": 65519, 00:18:49.346 "ana_reporting": 
false 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "nvmf_subsystem_add_host", 00:18:49.346 "params": { 00:18:49.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.346 "host": "nqn.2016-06.io.spdk:host1", 00:18:49.346 "psk": "key0" 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "nvmf_subsystem_add_ns", 00:18:49.346 "params": { 00:18:49.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.346 "namespace": { 00:18:49.346 "nsid": 1, 00:18:49.346 "bdev_name": "malloc0", 00:18:49.346 "nguid": "847770F98BD44183AA7C9E461B5E3A69", 00:18:49.346 "uuid": "847770f9-8bd4-4183-aa7c-9e461b5e3a69", 00:18:49.346 "no_auto_visible": false 00:18:49.346 } 00:18:49.346 } 00:18:49.346 }, 00:18:49.346 { 00:18:49.346 "method": "nvmf_subsystem_add_listener", 00:18:49.346 "params": { 00:18:49.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.346 "listen_address": { 00:18:49.346 "trtype": "TCP", 00:18:49.346 "adrfam": "IPv4", 00:18:49.346 "traddr": "10.0.0.2", 00:18:49.346 "trsvcid": "4420" 00:18:49.346 }, 00:18:49.346 "secure_channel": false, 00:18:49.346 "sock_impl": "ssl" 00:18:49.346 } 00:18:49.346 } 00:18:49.346 ] 00:18:49.346 } 00:18:49.346 ] 00:18:49.346 }' 00:18:49.346 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:49.606 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:49.606 "subsystems": [ 00:18:49.606 { 00:18:49.606 "subsystem": "keyring", 00:18:49.606 "config": [ 00:18:49.606 { 00:18:49.606 "method": "keyring_file_add_key", 00:18:49.606 "params": { 00:18:49.606 "name": "key0", 00:18:49.606 "path": "/tmp/tmp.L9jFKNf80K" 00:18:49.606 } 00:18:49.606 } 00:18:49.606 ] 00:18:49.606 }, 00:18:49.606 { 00:18:49.606 "subsystem": "iobuf", 00:18:49.606 "config": [ 00:18:49.606 { 00:18:49.606 "method": "iobuf_set_options", 00:18:49.606 "params": { 00:18:49.606 "small_pool_count": 
8192, 00:18:49.606 "large_pool_count": 1024, 00:18:49.606 "small_bufsize": 8192, 00:18:49.606 "large_bufsize": 135168, 00:18:49.606 "enable_numa": false 00:18:49.606 } 00:18:49.606 } 00:18:49.606 ] 00:18:49.606 }, 00:18:49.606 { 00:18:49.606 "subsystem": "sock", 00:18:49.606 "config": [ 00:18:49.606 { 00:18:49.606 "method": "sock_set_default_impl", 00:18:49.606 "params": { 00:18:49.606 "impl_name": "posix" 00:18:49.606 } 00:18:49.606 }, 00:18:49.606 { 00:18:49.606 "method": "sock_impl_set_options", 00:18:49.606 "params": { 00:18:49.606 "impl_name": "ssl", 00:18:49.606 "recv_buf_size": 4096, 00:18:49.606 "send_buf_size": 4096, 00:18:49.606 "enable_recv_pipe": true, 00:18:49.606 "enable_quickack": false, 00:18:49.606 "enable_placement_id": 0, 00:18:49.606 "enable_zerocopy_send_server": true, 00:18:49.606 "enable_zerocopy_send_client": false, 00:18:49.606 "zerocopy_threshold": 0, 00:18:49.606 "tls_version": 0, 00:18:49.606 "enable_ktls": false 00:18:49.606 } 00:18:49.606 }, 00:18:49.606 { 00:18:49.606 "method": "sock_impl_set_options", 00:18:49.606 "params": { 00:18:49.606 "impl_name": "posix", 00:18:49.606 "recv_buf_size": 2097152, 00:18:49.606 "send_buf_size": 2097152, 00:18:49.606 "enable_recv_pipe": true, 00:18:49.606 "enable_quickack": false, 00:18:49.606 "enable_placement_id": 0, 00:18:49.606 "enable_zerocopy_send_server": true, 00:18:49.606 "enable_zerocopy_send_client": false, 00:18:49.606 "zerocopy_threshold": 0, 00:18:49.606 "tls_version": 0, 00:18:49.606 "enable_ktls": false 00:18:49.606 } 00:18:49.606 } 00:18:49.606 ] 00:18:49.606 }, 00:18:49.606 { 00:18:49.606 "subsystem": "vmd", 00:18:49.606 "config": [] 00:18:49.606 }, 00:18:49.606 { 00:18:49.606 "subsystem": "accel", 00:18:49.606 "config": [ 00:18:49.606 { 00:18:49.606 "method": "accel_set_options", 00:18:49.606 "params": { 00:18:49.606 "small_cache_size": 128, 00:18:49.606 "large_cache_size": 16, 00:18:49.606 "task_count": 2048, 00:18:49.606 "sequence_count": 2048, 00:18:49.606 "buf_count": 2048 
00:18:49.606 } 00:18:49.606 } 00:18:49.606 ] 00:18:49.606 }, 00:18:49.606 { 00:18:49.606 "subsystem": "bdev", 00:18:49.606 "config": [ 00:18:49.606 { 00:18:49.606 "method": "bdev_set_options", 00:18:49.606 "params": { 00:18:49.606 "bdev_io_pool_size": 65535, 00:18:49.606 "bdev_io_cache_size": 256, 00:18:49.606 "bdev_auto_examine": true, 00:18:49.606 "iobuf_small_cache_size": 128, 00:18:49.606 "iobuf_large_cache_size": 16 00:18:49.606 } 00:18:49.606 }, 00:18:49.606 { 00:18:49.606 "method": "bdev_raid_set_options", 00:18:49.606 "params": { 00:18:49.606 "process_window_size_kb": 1024, 00:18:49.606 "process_max_bandwidth_mb_sec": 0 00:18:49.606 } 00:18:49.606 }, 00:18:49.606 { 00:18:49.606 "method": "bdev_iscsi_set_options", 00:18:49.606 "params": { 00:18:49.606 "timeout_sec": 30 00:18:49.606 } 00:18:49.606 }, 00:18:49.606 { 00:18:49.606 "method": "bdev_nvme_set_options", 00:18:49.606 "params": { 00:18:49.606 "action_on_timeout": "none", 00:18:49.606 "timeout_us": 0, 00:18:49.606 "timeout_admin_us": 0, 00:18:49.606 "keep_alive_timeout_ms": 10000, 00:18:49.606 "arbitration_burst": 0, 00:18:49.606 "low_priority_weight": 0, 00:18:49.606 "medium_priority_weight": 0, 00:18:49.606 "high_priority_weight": 0, 00:18:49.606 "nvme_adminq_poll_period_us": 10000, 00:18:49.606 "nvme_ioq_poll_period_us": 0, 00:18:49.606 "io_queue_requests": 512, 00:18:49.606 "delay_cmd_submit": true, 00:18:49.606 "transport_retry_count": 4, 00:18:49.606 "bdev_retry_count": 3, 00:18:49.606 "transport_ack_timeout": 0, 00:18:49.606 "ctrlr_loss_timeout_sec": 0, 00:18:49.606 "reconnect_delay_sec": 0, 00:18:49.606 "fast_io_fail_timeout_sec": 0, 00:18:49.606 "disable_auto_failback": false, 00:18:49.606 "generate_uuids": false, 00:18:49.606 "transport_tos": 0, 00:18:49.606 "nvme_error_stat": false, 00:18:49.606 "rdma_srq_size": 0, 00:18:49.606 "io_path_stat": false, 00:18:49.606 "allow_accel_sequence": false, 00:18:49.606 "rdma_max_cq_size": 0, 00:18:49.606 "rdma_cm_event_timeout_ms": 0, 00:18:49.606 
"dhchap_digests": [ 00:18:49.606 "sha256", 00:18:49.607 "sha384", 00:18:49.607 "sha512" 00:18:49.607 ], 00:18:49.607 "dhchap_dhgroups": [ 00:18:49.607 "null", 00:18:49.607 "ffdhe2048", 00:18:49.607 "ffdhe3072", 00:18:49.607 "ffdhe4096", 00:18:49.607 "ffdhe6144", 00:18:49.607 "ffdhe8192" 00:18:49.607 ] 00:18:49.607 } 00:18:49.607 }, 00:18:49.607 { 00:18:49.607 "method": "bdev_nvme_attach_controller", 00:18:49.607 "params": { 00:18:49.607 "name": "nvme0", 00:18:49.607 "trtype": "TCP", 00:18:49.607 "adrfam": "IPv4", 00:18:49.607 "traddr": "10.0.0.2", 00:18:49.607 "trsvcid": "4420", 00:18:49.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.607 "prchk_reftag": false, 00:18:49.607 "prchk_guard": false, 00:18:49.607 "ctrlr_loss_timeout_sec": 0, 00:18:49.607 "reconnect_delay_sec": 0, 00:18:49.607 "fast_io_fail_timeout_sec": 0, 00:18:49.607 "psk": "key0", 00:18:49.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.607 "hdgst": false, 00:18:49.607 "ddgst": false, 00:18:49.607 "multipath": "multipath" 00:18:49.607 } 00:18:49.607 }, 00:18:49.607 { 00:18:49.607 "method": "bdev_nvme_set_hotplug", 00:18:49.607 "params": { 00:18:49.607 "period_us": 100000, 00:18:49.607 "enable": false 00:18:49.607 } 00:18:49.607 }, 00:18:49.607 { 00:18:49.607 "method": "bdev_enable_histogram", 00:18:49.607 "params": { 00:18:49.607 "name": "nvme0n1", 00:18:49.607 "enable": true 00:18:49.607 } 00:18:49.607 }, 00:18:49.607 { 00:18:49.607 "method": "bdev_wait_for_examine" 00:18:49.607 } 00:18:49.607 ] 00:18:49.607 }, 00:18:49.607 { 00:18:49.607 "subsystem": "nbd", 00:18:49.607 "config": [] 00:18:49.607 } 00:18:49.607 ] 00:18:49.607 }' 00:18:49.607 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 4081056 00:18:49.607 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4081056 ']' 00:18:49.607 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4081056 00:18:49.607 11:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.607 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.607 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4081056 00:18:49.607 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.607 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.607 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4081056' 00:18:49.607 killing process with pid 4081056 00:18:49.607 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4081056 00:18:49.607 Received shutdown signal, test time was about 1.000000 seconds 00:18:49.607 00:18:49.607 Latency(us) 00:18:49.607 [2024-11-20T10:13:17.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.607 [2024-11-20T10:13:17.103Z] =================================================================================================================== 00:18:49.607 [2024-11-20T10:13:17.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.607 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4081056 00:18:49.866 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 4081029 00:18:49.866 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4081029 ']' 00:18:49.866 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4081029 00:18:49.866 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.866 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.866 
11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4081029 00:18:49.866 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.866 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.866 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4081029' 00:18:49.866 killing process with pid 4081029 00:18:49.866 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4081029 00:18:49.866 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4081029 00:18:50.126 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:50.126 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.126 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.126 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:50.126 "subsystems": [ 00:18:50.126 { 00:18:50.126 "subsystem": "keyring", 00:18:50.126 "config": [ 00:18:50.126 { 00:18:50.126 "method": "keyring_file_add_key", 00:18:50.126 "params": { 00:18:50.126 "name": "key0", 00:18:50.126 "path": "/tmp/tmp.L9jFKNf80K" 00:18:50.126 } 00:18:50.126 } 00:18:50.126 ] 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "subsystem": "iobuf", 00:18:50.126 "config": [ 00:18:50.126 { 00:18:50.126 "method": "iobuf_set_options", 00:18:50.126 "params": { 00:18:50.126 "small_pool_count": 8192, 00:18:50.126 "large_pool_count": 1024, 00:18:50.126 "small_bufsize": 8192, 00:18:50.126 "large_bufsize": 135168, 00:18:50.126 "enable_numa": false 00:18:50.126 } 00:18:50.126 } 00:18:50.126 ] 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "subsystem": "sock", 00:18:50.126 "config": [ 
00:18:50.126 { 00:18:50.126 "method": "sock_set_default_impl", 00:18:50.126 "params": { 00:18:50.126 "impl_name": "posix" 00:18:50.126 } 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "method": "sock_impl_set_options", 00:18:50.126 "params": { 00:18:50.126 "impl_name": "ssl", 00:18:50.126 "recv_buf_size": 4096, 00:18:50.126 "send_buf_size": 4096, 00:18:50.126 "enable_recv_pipe": true, 00:18:50.126 "enable_quickack": false, 00:18:50.126 "enable_placement_id": 0, 00:18:50.126 "enable_zerocopy_send_server": true, 00:18:50.126 "enable_zerocopy_send_client": false, 00:18:50.126 "zerocopy_threshold": 0, 00:18:50.126 "tls_version": 0, 00:18:50.126 "enable_ktls": false 00:18:50.126 } 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "method": "sock_impl_set_options", 00:18:50.126 "params": { 00:18:50.126 "impl_name": "posix", 00:18:50.126 "recv_buf_size": 2097152, 00:18:50.126 "send_buf_size": 2097152, 00:18:50.126 "enable_recv_pipe": true, 00:18:50.126 "enable_quickack": false, 00:18:50.126 "enable_placement_id": 0, 00:18:50.126 "enable_zerocopy_send_server": true, 00:18:50.126 "enable_zerocopy_send_client": false, 00:18:50.126 "zerocopy_threshold": 0, 00:18:50.126 "tls_version": 0, 00:18:50.126 "enable_ktls": false 00:18:50.126 } 00:18:50.126 } 00:18:50.126 ] 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "subsystem": "vmd", 00:18:50.126 "config": [] 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "subsystem": "accel", 00:18:50.126 "config": [ 00:18:50.126 { 00:18:50.126 "method": "accel_set_options", 00:18:50.126 "params": { 00:18:50.126 "small_cache_size": 128, 00:18:50.126 "large_cache_size": 16, 00:18:50.126 "task_count": 2048, 00:18:50.126 "sequence_count": 2048, 00:18:50.126 "buf_count": 2048 00:18:50.126 } 00:18:50.126 } 00:18:50.126 ] 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "subsystem": "bdev", 00:18:50.126 "config": [ 00:18:50.126 { 00:18:50.126 "method": "bdev_set_options", 00:18:50.126 "params": { 00:18:50.126 "bdev_io_pool_size": 65535, 00:18:50.126 "bdev_io_cache_size": 
256, 00:18:50.126 "bdev_auto_examine": true, 00:18:50.126 "iobuf_small_cache_size": 128, 00:18:50.126 "iobuf_large_cache_size": 16 00:18:50.126 } 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "method": "bdev_raid_set_options", 00:18:50.126 "params": { 00:18:50.126 "process_window_size_kb": 1024, 00:18:50.126 "process_max_bandwidth_mb_sec": 0 00:18:50.126 } 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "method": "bdev_iscsi_set_options", 00:18:50.126 "params": { 00:18:50.126 "timeout_sec": 30 00:18:50.126 } 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "method": "bdev_nvme_set_options", 00:18:50.126 "params": { 00:18:50.126 "action_on_timeout": "none", 00:18:50.126 "timeout_us": 0, 00:18:50.126 "timeout_admin_us": 0, 00:18:50.126 "keep_alive_timeout_ms": 10000, 00:18:50.126 "arbitration_burst": 0, 00:18:50.126 "low_priority_weight": 0, 00:18:50.126 "medium_priority_weight": 0, 00:18:50.126 "high_priority_weight": 0, 00:18:50.126 "nvme_adminq_poll_period_us": 10000, 00:18:50.126 "nvme_ioq_poll_period_us": 0, 00:18:50.126 "io_queue_requests": 0, 00:18:50.126 "delay_cmd_submit": true, 00:18:50.126 "transport_retry_count": 4, 00:18:50.126 "bdev_retry_count": 3, 00:18:50.126 "transport_ack_timeout": 0, 00:18:50.126 "ctrlr_loss_timeout_sec": 0, 00:18:50.126 "reconnect_delay_sec": 0, 00:18:50.126 "fast_io_fail_timeout_sec": 0, 00:18:50.126 "disable_auto_failback": false, 00:18:50.126 "generate_uuids": false, 00:18:50.126 "transport_tos": 0, 00:18:50.126 "nvme_error_stat": false, 00:18:50.126 "rdma_srq_size": 0, 00:18:50.126 "io_path_stat": false, 00:18:50.126 "allow_accel_sequence": false, 00:18:50.126 "rdma_max_cq_size": 0, 00:18:50.126 "rdma_cm_event_timeout_ms": 0, 00:18:50.126 "dhchap_digests": [ 00:18:50.126 "sha256", 00:18:50.126 "sha384", 00:18:50.126 "sha512" 00:18:50.126 ], 00:18:50.126 "dhchap_dhgroups": [ 00:18:50.126 "null", 00:18:50.126 "ffdhe2048", 00:18:50.126 "ffdhe3072", 00:18:50.126 "ffdhe4096", 00:18:50.126 "ffdhe6144", 00:18:50.126 "ffdhe8192" 00:18:50.126 ] 
00:18:50.126 } 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "method": "bdev_nvme_set_hotplug", 00:18:50.126 "params": { 00:18:50.126 "period_us": 100000, 00:18:50.126 "enable": false 00:18:50.126 } 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "method": "bdev_malloc_create", 00:18:50.126 "params": { 00:18:50.126 "name": "malloc0", 00:18:50.126 "num_blocks": 8192, 00:18:50.126 "block_size": 4096, 00:18:50.126 "physical_block_size": 4096, 00:18:50.126 "uuid": "847770f9-8bd4-4183-aa7c-9e461b5e3a69", 00:18:50.126 "optimal_io_boundary": 0, 00:18:50.126 "md_size": 0, 00:18:50.126 "dif_type": 0, 00:18:50.126 "dif_is_head_of_md": false, 00:18:50.126 "dif_pi_format": 0 00:18:50.126 } 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "method": "bdev_wait_for_examine" 00:18:50.126 } 00:18:50.126 ] 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "subsystem": "nbd", 00:18:50.126 "config": [] 00:18:50.126 }, 00:18:50.126 { 00:18:50.126 "subsystem": "scheduler", 00:18:50.127 "config": [ 00:18:50.127 { 00:18:50.127 "method": "framework_set_scheduler", 00:18:50.127 "params": { 00:18:50.127 "name": "static" 00:18:50.127 } 00:18:50.127 } 00:18:50.127 ] 00:18:50.127 }, 00:18:50.127 { 00:18:50.127 "subsystem": "nvmf", 00:18:50.127 "config": [ 00:18:50.127 { 00:18:50.127 "method": "nvmf_set_config", 00:18:50.127 "params": { 00:18:50.127 "discovery_filter": "match_any", 00:18:50.127 "admin_cmd_passthru": { 00:18:50.127 "identify_ctrlr": false 00:18:50.127 }, 00:18:50.127 "dhchap_digests": [ 00:18:50.127 "sha256", 00:18:50.127 "sha384", 00:18:50.127 "sha512" 00:18:50.127 ], 00:18:50.127 "dhchap_dhgroups": [ 00:18:50.127 "null", 00:18:50.127 "ffdhe2048", 00:18:50.127 "ffdhe3072", 00:18:50.127 "ffdhe4096", 00:18:50.127 "ffdhe6144", 00:18:50.127 "ffdhe8192" 00:18:50.127 ] 00:18:50.127 } 00:18:50.127 }, 00:18:50.127 { 00:18:50.127 "method": "nvmf_set_max_subsystems", 00:18:50.127 "params": { 00:18:50.127 "max_subsystems": 1024 00:18:50.127 } 00:18:50.127 }, 00:18:50.127 { 00:18:50.127 "method": 
"nvmf_set_crdt", 00:18:50.127 "params": { 00:18:50.127 "crdt1": 0, 00:18:50.127 "crdt2": 0, 00:18:50.127 "crdt3": 0 00:18:50.127 } 00:18:50.127 }, 00:18:50.127 { 00:18:50.127 "method": "nvmf_create_transport", 00:18:50.127 "params": { 00:18:50.127 "trtype": "TCP", 00:18:50.127 "max_queue_depth": 128, 00:18:50.127 "max_io_qpairs_per_ctrlr": 127, 00:18:50.127 "in_capsule_data_size": 4096, 00:18:50.127 "max_io_size": 131072, 00:18:50.127 "io_unit_size": 131072, 00:18:50.127 "max_aq_depth": 128, 00:18:50.127 "num_shared_buffers": 511, 00:18:50.127 "buf_cache_size": 4294967295, 00:18:50.127 "dif_insert_or_strip": false, 00:18:50.127 "zcopy": false, 00:18:50.127 "c2h_success": false, 00:18:50.127 "sock_priority": 0, 00:18:50.127 "abort_timeout_sec": 1, 00:18:50.127 "ack_timeout": 0, 00:18:50.127 "data_wr_pool_size": 0 00:18:50.127 } 00:18:50.127 }, 00:18:50.127 { 00:18:50.127 "method": "nvmf_create_subsystem", 00:18:50.127 "params": { 00:18:50.127 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.127 "allow_any_host": false, 00:18:50.127 "serial_number": "00000000000000000000", 00:18:50.127 "model_number": "SPDK bdev Controller", 00:18:50.127 "max_namespaces": 32, 00:18:50.127 "min_cntlid": 1, 00:18:50.127 "max_cntlid": 65519, 00:18:50.127 "ana_reporting": false 00:18:50.127 } 00:18:50.127 }, 00:18:50.127 { 00:18:50.127 "method": "nvmf_subsystem_add_host", 00:18:50.127 "params": { 00:18:50.127 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.127 "host": "nqn.2016-06.io.spdk:host1", 00:18:50.127 "psk": "key0" 00:18:50.127 } 00:18:50.127 }, 00:18:50.127 { 00:18:50.127 "method": "nvmf_subsystem_add_ns", 00:18:50.127 "params": { 00:18:50.127 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.127 "namespace": { 00:18:50.127 "nsid": 1, 00:18:50.127 "bdev_name": "malloc0", 00:18:50.127 "nguid": "847770F98BD44183AA7C9E461B5E3A69", 00:18:50.127 "uuid": "847770f9-8bd4-4183-aa7c-9e461b5e3a69", 00:18:50.127 "no_auto_visible": false 00:18:50.127 } 00:18:50.127 } 00:18:50.127 }, 00:18:50.127 { 
00:18:50.127 "method": "nvmf_subsystem_add_listener", 00:18:50.127 "params": { 00:18:50.127 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.127 "listen_address": { 00:18:50.127 "trtype": "TCP", 00:18:50.127 "adrfam": "IPv4", 00:18:50.127 "traddr": "10.0.0.2", 00:18:50.127 "trsvcid": "4420" 00:18:50.127 }, 00:18:50.127 "secure_channel": false, 00:18:50.127 "sock_impl": "ssl" 00:18:50.127 } 00:18:50.127 } 00:18:50.127 ] 00:18:50.127 } 00:18:50.127 ] 00:18:50.127 }' 00:18:50.127 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.127 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4081529 00:18:50.127 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:50.127 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4081529 00:18:50.127 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4081529 ']' 00:18:50.127 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.127 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.127 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.127 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.127 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.127 [2024-11-20 11:13:17.451275] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:18:50.127 [2024-11-20 11:13:17.451321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.127 [2024-11-20 11:13:17.531282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.127 [2024-11-20 11:13:17.571323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.127 [2024-11-20 11:13:17.571360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.127 [2024-11-20 11:13:17.571367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.127 [2024-11-20 11:13:17.571374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.127 [2024-11-20 11:13:17.571379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.127 [2024-11-20 11:13:17.571986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.386 [2024-11-20 11:13:17.785163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.386 [2024-11-20 11:13:17.817195] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.386 [2024-11-20 11:13:17.817412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=4081772 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 4081772 /var/tmp/bdevperf.sock 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4081772 ']' 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.955 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:50.955 "subsystems": [ 00:18:50.955 { 00:18:50.955 "subsystem": "keyring", 00:18:50.955 "config": [ 00:18:50.955 { 00:18:50.955 "method": "keyring_file_add_key", 00:18:50.955 "params": { 00:18:50.955 "name": "key0", 00:18:50.955 "path": "/tmp/tmp.L9jFKNf80K" 00:18:50.955 } 00:18:50.955 } 00:18:50.955 ] 00:18:50.955 }, 00:18:50.955 { 00:18:50.955 "subsystem": "iobuf", 00:18:50.955 "config": [ 00:18:50.955 { 00:18:50.955 "method": "iobuf_set_options", 00:18:50.955 "params": { 00:18:50.955 "small_pool_count": 8192, 00:18:50.955 "large_pool_count": 1024, 00:18:50.955 "small_bufsize": 8192, 00:18:50.955 "large_bufsize": 135168, 00:18:50.955 "enable_numa": false 00:18:50.955 } 00:18:50.955 } 00:18:50.955 ] 00:18:50.955 }, 00:18:50.955 { 00:18:50.955 "subsystem": "sock", 00:18:50.955 "config": [ 00:18:50.955 { 00:18:50.955 "method": "sock_set_default_impl", 00:18:50.955 "params": { 00:18:50.955 "impl_name": "posix" 00:18:50.955 } 00:18:50.955 }, 00:18:50.955 { 00:18:50.955 "method": "sock_impl_set_options", 00:18:50.955 "params": { 00:18:50.955 "impl_name": "ssl", 00:18:50.955 "recv_buf_size": 4096, 00:18:50.955 "send_buf_size": 4096, 00:18:50.955 "enable_recv_pipe": true, 00:18:50.955 "enable_quickack": false, 00:18:50.955 "enable_placement_id": 0, 00:18:50.955 "enable_zerocopy_send_server": true, 00:18:50.955 "enable_zerocopy_send_client": false, 00:18:50.955 "zerocopy_threshold": 0, 00:18:50.955 "tls_version": 0, 00:18:50.955 "enable_ktls": false 00:18:50.955 } 00:18:50.955 }, 00:18:50.955 { 00:18:50.955 "method": "sock_impl_set_options", 00:18:50.955 "params": { 
00:18:50.955 "impl_name": "posix", 00:18:50.955 "recv_buf_size": 2097152, 00:18:50.955 "send_buf_size": 2097152, 00:18:50.955 "enable_recv_pipe": true, 00:18:50.955 "enable_quickack": false, 00:18:50.955 "enable_placement_id": 0, 00:18:50.955 "enable_zerocopy_send_server": true, 00:18:50.955 "enable_zerocopy_send_client": false, 00:18:50.955 "zerocopy_threshold": 0, 00:18:50.955 "tls_version": 0, 00:18:50.955 "enable_ktls": false 00:18:50.955 } 00:18:50.955 } 00:18:50.955 ] 00:18:50.955 }, 00:18:50.955 { 00:18:50.955 "subsystem": "vmd", 00:18:50.955 "config": [] 00:18:50.955 }, 00:18:50.955 { 00:18:50.955 "subsystem": "accel", 00:18:50.955 "config": [ 00:18:50.955 { 00:18:50.955 "method": "accel_set_options", 00:18:50.955 "params": { 00:18:50.955 "small_cache_size": 128, 00:18:50.955 "large_cache_size": 16, 00:18:50.955 "task_count": 2048, 00:18:50.955 "sequence_count": 2048, 00:18:50.955 "buf_count": 2048 00:18:50.955 } 00:18:50.955 } 00:18:50.955 ] 00:18:50.955 }, 00:18:50.955 { 00:18:50.955 "subsystem": "bdev", 00:18:50.955 "config": [ 00:18:50.955 { 00:18:50.955 "method": "bdev_set_options", 00:18:50.955 "params": { 00:18:50.955 "bdev_io_pool_size": 65535, 00:18:50.955 "bdev_io_cache_size": 256, 00:18:50.955 "bdev_auto_examine": true, 00:18:50.955 "iobuf_small_cache_size": 128, 00:18:50.955 "iobuf_large_cache_size": 16 00:18:50.955 } 00:18:50.955 }, 00:18:50.955 { 00:18:50.955 "method": "bdev_raid_set_options", 00:18:50.955 "params": { 00:18:50.955 "process_window_size_kb": 1024, 00:18:50.955 "process_max_bandwidth_mb_sec": 0 00:18:50.955 } 00:18:50.955 }, 00:18:50.955 { 00:18:50.955 "method": "bdev_iscsi_set_options", 00:18:50.955 "params": { 00:18:50.955 "timeout_sec": 30 00:18:50.955 } 00:18:50.955 }, 00:18:50.955 { 00:18:50.955 "method": "bdev_nvme_set_options", 00:18:50.955 "params": { 00:18:50.955 "action_on_timeout": "none", 00:18:50.955 "timeout_us": 0, 00:18:50.955 "timeout_admin_us": 0, 00:18:50.955 "keep_alive_timeout_ms": 10000, 00:18:50.955 
"arbitration_burst": 0, 00:18:50.955 "low_priority_weight": 0, 00:18:50.955 "medium_priority_weight": 0, 00:18:50.955 "high_priority_weight": 0, 00:18:50.955 "nvme_adminq_poll_period_us": 10000, 00:18:50.955 "nvme_ioq_poll_period_us": 0, 00:18:50.955 "io_queue_requests": 512, 00:18:50.955 "delay_cmd_submit": true, 00:18:50.956 "transport_retry_count": 4, 00:18:50.956 "bdev_retry_count": 3, 00:18:50.956 "transport_ack_timeout": 0, 00:18:50.956 "ctrlr_loss_timeout_sec": 0, 00:18:50.956 "reconnect_delay_sec": 0, 00:18:50.956 "fast_io_fail_timeout_sec": 0, 00:18:50.956 "disable_auto_failback": false, 00:18:50.956 "generate_uuids": false, 00:18:50.956 "transport_tos": 0, 00:18:50.956 "nvme_error_stat": false, 00:18:50.956 "rdma_srq_size": 0, 00:18:50.956 "io_path_stat": false, 00:18:50.956 "allow_accel_sequence": false, 00:18:50.956 "rdma_max_cq_size": 0, 00:18:50.956 "rdma_cm_event_timeout_ms": 0, 00:18:50.956 "dhchap_digests": [ 00:18:50.956 "sha256", 00:18:50.956 "sha384", 00:18:50.956 "sha512" 00:18:50.956 ], 00:18:50.956 "dhchap_dhgroups": [ 00:18:50.956 "null", 00:18:50.956 "ffdhe2048", 00:18:50.956 "ffdhe3072", 00:18:50.956 "ffdhe4096", 00:18:50.956 "ffdhe6144", 00:18:50.956 "ffdhe8192" 00:18:50.956 ] 00:18:50.956 } 00:18:50.956 }, 00:18:50.956 { 00:18:50.956 "method": "bdev_nvme_attach_controller", 00:18:50.956 "params": { 00:18:50.956 "name": "nvme0", 00:18:50.956 "trtype": "TCP", 00:18:50.956 "adrfam": "IPv4", 00:18:50.956 "traddr": "10.0.0.2", 00:18:50.956 "trsvcid": "4420", 00:18:50.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.956 "prchk_reftag": false, 00:18:50.956 "prchk_guard": false, 00:18:50.956 "ctrlr_loss_timeout_sec": 0, 00:18:50.956 "reconnect_delay_sec": 0, 00:18:50.956 "fast_io_fail_timeout_sec": 0, 00:18:50.956 "psk": "key0", 00:18:50.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.956 "hdgst": false, 00:18:50.956 "ddgst": false, 00:18:50.956 "multipath": "multipath" 00:18:50.956 } 00:18:50.956 }, 00:18:50.956 { 00:18:50.956 
"method": "bdev_nvme_set_hotplug", 00:18:50.956 "params": { 00:18:50.956 "period_us": 100000, 00:18:50.956 "enable": false 00:18:50.956 } 00:18:50.956 }, 00:18:50.956 { 00:18:50.956 "method": "bdev_enable_histogram", 00:18:50.956 "params": { 00:18:50.956 "name": "nvme0n1", 00:18:50.956 "enable": true 00:18:50.956 } 00:18:50.956 }, 00:18:50.956 { 00:18:50.956 "method": "bdev_wait_for_examine" 00:18:50.956 } 00:18:50.956 ] 00:18:50.956 }, 00:18:50.956 { 00:18:50.956 "subsystem": "nbd", 00:18:50.956 "config": [] 00:18:50.956 } 00:18:50.956 ] 00:18:50.956 }' 00:18:50.956 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.956 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.956 [2024-11-20 11:13:18.370167] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:18:50.956 [2024-11-20 11:13:18.370214] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081772 ] 00:18:50.956 [2024-11-20 11:13:18.443904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.215 [2024-11-20 11:13:18.485475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.215 [2024-11-20 11:13:18.638923] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.783 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.783 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.783 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:51.783 11:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:52.041 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.041 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.041 Running I/O for 1 seconds... 00:18:53.421 4961.00 IOPS, 19.38 MiB/s 00:18:53.421 Latency(us) 00:18:53.421 [2024-11-20T10:13:20.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.421 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:53.421 Verification LBA range: start 0x0 length 0x2000 00:18:53.421 nvme0n1 : 1.02 5002.85 19.54 0.00 0.00 25384.02 5071.92 57899.63 00:18:53.421 [2024-11-20T10:13:20.917Z] =================================================================================================================== 00:18:53.421 [2024-11-20T10:13:20.917Z] Total : 5002.85 19.54 0.00 0.00 25384.02 5071.92 57899.63 00:18:53.421 { 00:18:53.421 "results": [ 00:18:53.421 { 00:18:53.421 "job": "nvme0n1", 00:18:53.421 "core_mask": "0x2", 00:18:53.421 "workload": "verify", 00:18:53.421 "status": "finished", 00:18:53.421 "verify_range": { 00:18:53.421 "start": 0, 00:18:53.421 "length": 8192 00:18:53.421 }, 00:18:53.421 "queue_depth": 128, 00:18:53.421 "io_size": 4096, 00:18:53.421 "runtime": 1.01742, 00:18:53.421 "iops": 5002.850346956026, 00:18:53.421 "mibps": 19.54238416779698, 00:18:53.421 "io_failed": 0, 00:18:53.421 "io_timeout": 0, 00:18:53.421 "avg_latency_us": 25384.015159477236, 00:18:53.421 "min_latency_us": 5071.91652173913, 00:18:53.421 "max_latency_us": 57899.63130434783 00:18:53.421 } 00:18:53.421 ], 00:18:53.421 "core_count": 1 00:18:53.421 } 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:53.421 11:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:53.421 nvmf_trace.0 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 4081772 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4081772 ']' 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4081772 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 4081772 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4081772' 00:18:53.421 killing process with pid 4081772 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4081772 00:18:53.421 Received shutdown signal, test time was about 1.000000 seconds 00:18:53.421 00:18:53.421 Latency(us) 00:18:53.421 [2024-11-20T10:13:20.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.421 [2024-11-20T10:13:20.917Z] =================================================================================================================== 00:18:53.421 [2024-11-20T10:13:20.917Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4081772 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:53.421 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:53.421 rmmod nvme_tcp 00:18:53.421 rmmod nvme_fabrics 00:18:53.421 rmmod nvme_keyring 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 4081529 ']' 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 4081529 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4081529 ']' 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4081529 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4081529 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4081529' 00:18:53.681 killing process with pid 4081529 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4081529 00:18:53.681 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4081529 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.681 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.218 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yeHP4hnoeA /tmp/tmp.bkVpCE1lT2 /tmp/tmp.L9jFKNf80K 00:18:56.219 00:18:56.219 real 1m19.466s 00:18:56.219 user 2m0.501s 00:18:56.219 sys 0m31.732s 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.219 ************************************ 00:18:56.219 END TEST nvmf_tls 00:18:56.219 ************************************ 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.219 ************************************ 00:18:56.219 START TEST nvmf_fips 00:18:56.219 ************************************ 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:56.219 * Looking for test storage... 00:18:56.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.219 
11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:56.219 11:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:56.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.219 --rc genhtml_branch_coverage=1 00:18:56.219 --rc genhtml_function_coverage=1 00:18:56.219 --rc genhtml_legend=1 00:18:56.219 --rc geninfo_all_blocks=1 00:18:56.219 --rc geninfo_unexecuted_blocks=1 00:18:56.219 00:18:56.219 ' 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:56.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.219 --rc genhtml_branch_coverage=1 00:18:56.219 --rc genhtml_function_coverage=1 00:18:56.219 --rc genhtml_legend=1 00:18:56.219 --rc geninfo_all_blocks=1 00:18:56.219 --rc geninfo_unexecuted_blocks=1 00:18:56.219 00:18:56.219 ' 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:56.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.219 --rc genhtml_branch_coverage=1 00:18:56.219 --rc genhtml_function_coverage=1 00:18:56.219 --rc genhtml_legend=1 00:18:56.219 --rc geninfo_all_blocks=1 00:18:56.219 --rc geninfo_unexecuted_blocks=1 00:18:56.219 00:18:56.219 ' 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:56.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.219 --rc genhtml_branch_coverage=1 00:18:56.219 --rc genhtml_function_coverage=1 00:18:56.219 --rc genhtml_legend=1 00:18:56.219 --rc geninfo_all_blocks=1 00:18:56.219 --rc geninfo_unexecuted_blocks=1 00:18:56.219 00:18:56.219 ' 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.219 11:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.219 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.220 11:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:56.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:56.220 Error setting digest 00:18:56.220 40126C98F47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:56.220 40126C98F47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:56.220 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.221 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:56.221 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:56.221 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:56.221 11:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.221 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.221 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.221 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:56.221 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:56.221 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:56.221 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:02.791 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:02.792 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:02.792 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:02.792 Found net devices under 0000:86:00.0: cvl_0_0 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:02.792 Found net devices under 0000:86:00.1: cvl_0_1 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.792 11:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:02.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:19:02.792 00:19:02.792 --- 10.0.0.2 ping statistics --- 00:19:02.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.792 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:02.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:19:02.792 00:19:02.792 --- 10.0.0.1 ping statistics --- 00:19:02.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.792 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:02.792 11:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=4085786 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 4085786 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 4085786 ']' 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.792 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:02.792 [2024-11-20 11:13:29.671392] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:19:02.792 [2024-11-20 11:13:29.671444] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.792 [2024-11-20 11:13:29.752582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.793 [2024-11-20 11:13:29.791381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.793 [2024-11-20 11:13:29.791416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.793 [2024-11-20 11:13:29.791422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.793 [2024-11-20 11:13:29.791428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.793 [2024-11-20 11:13:29.791433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:02.793 [2024-11-20 11:13:29.791972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.2xA 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:03.051 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.2xA 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.2xA 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.2xA 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:03.310 [2024-11-20 11:13:30.716139] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.310 [2024-11-20 11:13:30.732141] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.310 [2024-11-20 11:13:30.732334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.310 malloc0 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=4086006 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 4086006 /var/tmp/bdevperf.sock 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 4086006 ']' 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.310 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:03.569 [2024-11-20 11:13:30.861956] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:19:03.569 [2024-11-20 11:13:30.862009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4086006 ] 00:19:03.569 [2024-11-20 11:13:30.938191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.569 [2024-11-20 11:13:30.978604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.507 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.507 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:04.507 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.2xA 00:19:04.507 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.765 [2024-11-20 11:13:32.079848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.766 TLSTESTn1 00:19:04.766 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.025 Running I/O for 10 seconds... 
00:19:06.898 5150.00 IOPS, 20.12 MiB/s [2024-11-20T10:13:35.330Z] 5358.00 IOPS, 20.93 MiB/s [2024-11-20T10:13:36.709Z] 5385.00 IOPS, 21.04 MiB/s [2024-11-20T10:13:37.644Z] 5370.75 IOPS, 20.98 MiB/s [2024-11-20T10:13:38.580Z] 5406.80 IOPS, 21.12 MiB/s [2024-11-20T10:13:39.594Z] 5394.33 IOPS, 21.07 MiB/s [2024-11-20T10:13:40.571Z] 5401.86 IOPS, 21.10 MiB/s [2024-11-20T10:13:41.508Z] 5404.25 IOPS, 21.11 MiB/s [2024-11-20T10:13:42.444Z] 5406.44 IOPS, 21.12 MiB/s [2024-11-20T10:13:42.444Z] 5412.50 IOPS, 21.14 MiB/s 00:19:14.948 Latency(us) 00:19:14.948 [2024-11-20T10:13:42.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.948 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.948 Verification LBA range: start 0x0 length 0x2000 00:19:14.948 TLSTESTn1 : 10.01 5418.17 21.16 0.00 0.00 23590.13 5185.89 26784.28 00:19:14.948 [2024-11-20T10:13:42.444Z] =================================================================================================================== 00:19:14.948 [2024-11-20T10:13:42.444Z] Total : 5418.17 21.16 0.00 0.00 23590.13 5185.89 26784.28 00:19:14.948 { 00:19:14.948 "results": [ 00:19:14.948 { 00:19:14.948 "job": "TLSTESTn1", 00:19:14.948 "core_mask": "0x4", 00:19:14.948 "workload": "verify", 00:19:14.948 "status": "finished", 00:19:14.948 "verify_range": { 00:19:14.948 "start": 0, 00:19:14.948 "length": 8192 00:19:14.948 }, 00:19:14.948 "queue_depth": 128, 00:19:14.948 "io_size": 4096, 00:19:14.948 "runtime": 10.012786, 00:19:14.948 "iops": 5418.172324865427, 00:19:14.948 "mibps": 21.164735644005574, 00:19:14.948 "io_failed": 0, 00:19:14.948 "io_timeout": 0, 00:19:14.948 "avg_latency_us": 23590.126155318314, 00:19:14.948 "min_latency_us": 5185.892173913044, 00:19:14.948 "max_latency_us": 26784.278260869567 00:19:14.948 } 00:19:14.948 ], 00:19:14.948 "core_count": 1 00:19:14.948 } 00:19:14.948 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:14.949 
11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:14.949 nvmf_trace.0 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4086006 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 4086006 ']' 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 4086006 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.949 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4086006 00:19:15.207 11:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4086006' 00:19:15.207 killing process with pid 4086006 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 4086006 00:19:15.207 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.207 00:19:15.207 Latency(us) 00:19:15.207 [2024-11-20T10:13:42.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.207 [2024-11-20T10:13:42.703Z] =================================================================================================================== 00:19:15.207 [2024-11-20T10:13:42.703Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 4086006 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:15.207 rmmod nvme_tcp 00:19:15.207 rmmod nvme_fabrics 00:19:15.207 rmmod nvme_keyring 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 4085786 ']' 00:19:15.207 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 4085786 00:19:15.208 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 4085786 ']' 00:19:15.208 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 4085786 00:19:15.208 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:15.208 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.208 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4085786 00:19:15.466 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:15.466 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:15.466 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4085786' 00:19:15.466 killing process with pid 4085786 00:19:15.466 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 4085786 00:19:15.466 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 4085786 00:19:15.466 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.467 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.006 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:18.006 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.2xA 00:19:18.006 00:19:18.006 real 0m21.680s 00:19:18.006 user 0m23.460s 00:19:18.006 sys 0m9.733s 00:19:18.006 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.006 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:18.006 ************************************ 00:19:18.006 END TEST nvmf_fips 00:19:18.006 ************************************ 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:18.006 ************************************ 00:19:18.006 START TEST nvmf_control_msg_list 00:19:18.006 ************************************ 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:18.006 * Looking for test storage... 00:19:18.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.006 11:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:18.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.006 --rc genhtml_branch_coverage=1 00:19:18.006 --rc genhtml_function_coverage=1 00:19:18.006 --rc genhtml_legend=1 00:19:18.006 --rc geninfo_all_blocks=1 00:19:18.006 --rc geninfo_unexecuted_blocks=1 00:19:18.006 00:19:18.006 ' 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:18.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.006 --rc genhtml_branch_coverage=1 00:19:18.006 --rc genhtml_function_coverage=1 00:19:18.006 --rc genhtml_legend=1 00:19:18.006 --rc geninfo_all_blocks=1 00:19:18.006 --rc geninfo_unexecuted_blocks=1 00:19:18.006 00:19:18.006 ' 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:18.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.006 --rc genhtml_branch_coverage=1 00:19:18.006 --rc genhtml_function_coverage=1 00:19:18.006 --rc genhtml_legend=1 00:19:18.006 --rc geninfo_all_blocks=1 00:19:18.006 --rc geninfo_unexecuted_blocks=1 00:19:18.006 00:19:18.006 ' 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:18.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.006 --rc genhtml_branch_coverage=1 00:19:18.006 --rc genhtml_function_coverage=1 00:19:18.006 --rc genhtml_legend=1 00:19:18.006 --rc geninfo_all_blocks=1 00:19:18.006 --rc geninfo_unexecuted_blocks=1 00:19:18.006 00:19:18.006 ' 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.006 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.007 11:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:18.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:18.007 11:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:18.007 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:24.578 11:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:24.578 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:24.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:24.578 11:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:24.578 Found net devices under 0000:86:00.0: cvl_0_0 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.578 11:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:24.578 Found net devices under 0000:86:00.1: cvl_0_1 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.578 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.578 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.579 11:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:24.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:19:24.579 00:19:24.579 --- 10.0.0.2 ping statistics --- 00:19:24.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.579 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:19:24.579 00:19:24.579 --- 10.0.0.1 ping statistics --- 00:19:24.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.579 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=4091420 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 4091420 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 4091420 ']' 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.579 [2024-11-20 11:13:51.258884] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:19:24.579 [2024-11-20 11:13:51.258935] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.579 [2024-11-20 11:13:51.340102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.579 [2024-11-20 11:13:51.381378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.579 [2024-11-20 11:13:51.381413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.579 [2024-11-20 11:13:51.381420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.579 [2024-11-20 11:13:51.381427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.579 [2024-11-20 11:13:51.381432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:24.579 [2024-11-20 11:13:51.381996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.579 [2024-11-20 11:13:51.521462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.579 Malloc0 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.579 [2024-11-20 11:13:51.561860] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=4091446 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=4091447 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=4091448 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:24.579 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 4091446 00:19:24.579 [2024-11-20 11:13:51.660577] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
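For orientation, the sequence this log records up to this point — network-namespace bring-up, NVMe-oF TCP target start, RPC configuration, and the three `spdk_nvme_perf` clients — can be sketched as below. Interface names (`cvl_0_0`/`cvl_0_1`), addresses, port, core masks, and SPDK paths are copied from the log; `rpc.py` stands in for the test suite's `rpc_cmd` wrapper. The function only echoes the commands (the real ones need root, this CI host's NICs, and an SPDK build tree), so treat it as a readable summary, not a substitute for the test scripts.

```shell
#!/usr/bin/env bash
# Echo-only summary of the commands this log records (nvmf/common.sh and
# target/control_msg_list.sh). Paths and names are taken from the log above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

nvmf_test_sketch() {
  # 1. Move the target-side port into a namespace and wire up addresses.
  echo ip netns add $NS
  echo ip link set cvl_0_0 netns $NS
  echo ip addr add 10.0.0.1/24 dev cvl_0_1
  echo ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  echo ip link set cvl_0_1 up
  echo ip netns exec $NS ip link set cvl_0_0 up
  echo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # 2. Start the target inside the namespace, then configure it over RPC
  #    (rpc.py is assumed here; the log uses the rpc_cmd wrapper).
  echo ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF
  echo $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o \
       --in-capsule-data-size 768 --control-msg-num 1
  echo $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  echo $SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  echo $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  echo $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
       -t tcp -a 10.0.0.2 -s 4420

  # 3. Three single-queue randread perf clients on cores 1-3 (masks 0x2/0x4/0x8).
  for mask in 0x2 0x4 0x8; do
    echo $SPDK/build/bin/spdk_nvme_perf -c $mask -q 1 -o 4096 -w randread -t 1 \
         -r "'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'"
  done
}

nvmf_test_sketch
```

The per-core result tables that follow in the log correspond to the three perf invocations in step 3, one controller connection per core mask.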
00:19:24.579 [2024-11-20 11:13:51.660771] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:24.579 [2024-11-20 11:13:51.660940] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:25.519 Initializing NVMe Controllers 00:19:25.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:25.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:25.519 Initialization complete. Launching workers. 00:19:25.519 ======================================================== 00:19:25.519 Latency(us) 00:19:25.519 Device Information : IOPS MiB/s Average min max 00:19:25.519 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40880.15 40356.36 41048.37 00:19:25.519 ======================================================== 00:19:25.519 Total : 25.00 0.10 40880.15 40356.36 41048.37 00:19:25.519 00:19:25.519 Initializing NVMe Controllers 00:19:25.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:25.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:25.519 Initialization complete. Launching workers. 
00:19:25.519 ======================================================== 00:19:25.519 Latency(us) 00:19:25.519 Device Information : IOPS MiB/s Average min max 00:19:25.519 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40920.41 40383.91 41919.90 00:19:25.519 ======================================================== 00:19:25.519 Total : 25.00 0.10 40920.41 40383.91 41919.90 00:19:25.519 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 4091447 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 4091448 00:19:25.519 Initializing NVMe Controllers 00:19:25.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:25.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:25.519 Initialization complete. Launching workers. 00:19:25.519 ======================================================== 00:19:25.519 Latency(us) 00:19:25.519 Device Information : IOPS MiB/s Average min max 00:19:25.519 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40882.26 40448.01 41027.94 00:19:25.519 ======================================================== 00:19:25.519 Total : 25.00 0.10 40882.26 40448.01 41027.94 00:19:25.519 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:25.519 11:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:25.519 rmmod nvme_tcp 00:19:25.519 rmmod nvme_fabrics 00:19:25.519 rmmod nvme_keyring 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 4091420 ']' 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 4091420 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 4091420 ']' 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 4091420 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4091420 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 4091420' 00:19:25.519 killing process with pid 4091420 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 4091420 00:19:25.519 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 4091420 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.779 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:28.316 00:19:28.316 real 0m10.171s 00:19:28.316 user 0m6.994s 
00:19:28.316 sys 0m5.343s 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:28.316 ************************************ 00:19:28.316 END TEST nvmf_control_msg_list 00:19:28.316 ************************************ 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:28.316 ************************************ 00:19:28.316 START TEST nvmf_wait_for_buf 00:19:28.316 ************************************ 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:28.316 * Looking for test storage... 
00:19:28.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:28.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.316 --rc genhtml_branch_coverage=1 00:19:28.316 --rc genhtml_function_coverage=1 00:19:28.316 --rc genhtml_legend=1 00:19:28.316 --rc geninfo_all_blocks=1 00:19:28.316 --rc geninfo_unexecuted_blocks=1 00:19:28.316 00:19:28.316 ' 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:28.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.316 --rc genhtml_branch_coverage=1 00:19:28.316 --rc genhtml_function_coverage=1 00:19:28.316 --rc genhtml_legend=1 00:19:28.316 --rc geninfo_all_blocks=1 00:19:28.316 --rc geninfo_unexecuted_blocks=1 00:19:28.316 00:19:28.316 ' 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:28.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.316 --rc genhtml_branch_coverage=1 00:19:28.316 --rc genhtml_function_coverage=1 00:19:28.316 --rc genhtml_legend=1 00:19:28.316 --rc geninfo_all_blocks=1 00:19:28.316 --rc geninfo_unexecuted_blocks=1 00:19:28.316 00:19:28.316 ' 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:28.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.316 --rc genhtml_branch_coverage=1 00:19:28.316 --rc genhtml_function_coverage=1 00:19:28.316 --rc genhtml_legend=1 00:19:28.316 --rc geninfo_all_blocks=1 00:19:28.316 --rc geninfo_unexecuted_blocks=1 00:19:28.316 00:19:28.316 ' 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.316 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:28.317 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:34.887 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:34.887 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:34.887 Found net devices under 0000:86:00.0: cvl_0_0 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.887 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.888 11:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:34.888 Found net devices under 0000:86:00.1: cvl_0_1 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:34.888 11:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.888 11:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:34.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:19:34.888 00:19:34.888 --- 10.0.0.2 ping statistics --- 00:19:34.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.888 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:34.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:19:34.888 00:19:34.888 --- 10.0.0.1 ping statistics --- 00:19:34.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.888 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=4095201 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 4095201 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 4095201 ']' 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.888 [2024-11-20 11:14:01.527788] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:19:34.888 [2024-11-20 11:14:01.527833] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.888 [2024-11-20 11:14:01.607116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.888 [2024-11-20 11:14:01.649068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.888 [2024-11-20 11:14:01.649103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:34.888 [2024-11-20 11:14:01.649111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.888 [2024-11-20 11:14:01.649118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.888 [2024-11-20 11:14:01.649124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.888 [2024-11-20 11:14:01.649690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.888 
11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.888 Malloc0 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.888 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.888 [2024-11-20 11:14:01.823972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.889 [2024-11-20 11:14:01.852157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:34.889 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:34.889 [2024-11-20 11:14:01.938299] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:35.825 Initializing NVMe Controllers 00:19:35.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:35.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:35.826 Initialization complete. Launching workers. 00:19:35.826 ======================================================== 00:19:35.826 Latency(us) 00:19:35.826 Device Information : IOPS MiB/s Average min max 00:19:35.826 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32238.24 7250.93 63864.36 00:19:35.826 ======================================================== 00:19:35.826 Total : 129.00 16.12 32238.24 7250.93 63864.36 00:19:35.826 00:19:35.826 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:35.826 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:35.826 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.826 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.085 11:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:36.085 rmmod nvme_tcp 00:19:36.085 rmmod nvme_fabrics 00:19:36.085 rmmod nvme_keyring 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 4095201 ']' 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 4095201 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 4095201 ']' 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 4095201 
00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4095201 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4095201' 00:19:36.085 killing process with pid 4095201 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 4095201 00:19:36.085 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 4095201 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:36.344 11:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.344 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.249 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:38.249 00:19:38.249 real 0m10.395s 00:19:38.249 user 0m3.932s 00:19:38.249 sys 0m4.931s 00:19:38.249 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.249 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:38.249 ************************************ 00:19:38.249 END TEST nvmf_wait_for_buf 00:19:38.249 ************************************ 00:19:38.249 11:14:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:38.249 11:14:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:38.249 11:14:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:38.249 11:14:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:38.249 11:14:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:38.249 11:14:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:44.823 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.823 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:44.823 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:44.823 
11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:44.824 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:44.824 11:14:11 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:44.824 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:44.824 Found net devices under 0000:86:00.0: cvl_0_0 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:44.824 Found net devices under 0000:86:00.1: cvl_0_1 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:44.824 ************************************ 00:19:44.824 START TEST nvmf_perf_adq 00:19:44.824 ************************************ 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:44.824 * Looking for test storage... 00:19:44.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.824 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:44.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.825 --rc genhtml_branch_coverage=1 00:19:44.825 --rc genhtml_function_coverage=1 00:19:44.825 --rc genhtml_legend=1 00:19:44.825 --rc geninfo_all_blocks=1 00:19:44.825 --rc geninfo_unexecuted_blocks=1 00:19:44.825 00:19:44.825 ' 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:44.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.825 --rc genhtml_branch_coverage=1 00:19:44.825 --rc genhtml_function_coverage=1 00:19:44.825 --rc genhtml_legend=1 00:19:44.825 --rc geninfo_all_blocks=1 00:19:44.825 --rc geninfo_unexecuted_blocks=1 00:19:44.825 00:19:44.825 ' 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:44.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.825 --rc genhtml_branch_coverage=1 00:19:44.825 --rc genhtml_function_coverage=1 00:19:44.825 --rc genhtml_legend=1 00:19:44.825 --rc geninfo_all_blocks=1 00:19:44.825 --rc geninfo_unexecuted_blocks=1 00:19:44.825 00:19:44.825 ' 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:44.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.825 --rc genhtml_branch_coverage=1 00:19:44.825 --rc genhtml_function_coverage=1 00:19:44.825 --rc genhtml_legend=1 00:19:44.825 --rc geninfo_all_blocks=1 00:19:44.825 --rc geninfo_unexecuted_blocks=1 00:19:44.825 00:19:44.825 ' 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.825 11:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:44.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:44.825 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:50.098 11:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:50.098 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.098 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:50.099 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:50.099 Found net devices under 0000:86:00.0: cvl_0_0 00:19:50.099 11:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:50.099 Found net devices under 0000:86:00.1: cvl_0_1 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:50.099 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:51.037 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:52.939 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:58.213 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:58.213 11:14:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:58.213 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:58.213 Found net devices under 0000:86:00.0: cvl_0_0 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:58.213 Found net devices under 0000:86:00.1: cvl_0_1 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.213 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:58.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:58.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:19:58.214 00:19:58.214 --- 10.0.0.2 ping statistics --- 00:19:58.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.214 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:19:58.214 00:19:58.214 --- 10.0.0.1 ping statistics --- 00:19:58.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.214 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
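The `nvmf_tcp_init` steps traced above split the two physical ports across a network namespace so target and initiator talk over the wire rather than loopback. A hedged, root-only sketch of that sequence (interface names `cvl_0_0`/`cvl_0_1` and the namespace name are taken from this log; do not run this on a box whose NICs you care about):

```
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0             # start from clean addresses
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1  # initiator side stays in the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port, tagging the rule so cleanup can grep it out later:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify both directions before starting the target:
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two successful single-packet pings in the log (0.463 ms host-to-namespace, 0.203 ms back) are exactly this verification step; `nvmf_tgt` is then launched under `ip netns exec $NS` via `NVMF_TARGET_NS_CMD`.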
start_nvmf_tgt 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=4103540 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 4103540 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 4103540 ']' 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.214 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.214 [2024-11-20 11:14:25.675409] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:19:58.214 [2024-11-20 11:14:25.675461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.473 [2024-11-20 11:14:25.755852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.473 [2024-11-20 11:14:25.800190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.473 [2024-11-20 11:14:25.800227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.473 [2024-11-20 11:14:25.800235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.473 [2024-11-20 11:14:25.800241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.473 [2024-11-20 11:14:25.800246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
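The `-m 0xF` mask passed to `nvmfappstart` is why EAL reports "Total cores available: 4" and the reactors start on cores 0-3; the perf client later runs with `-c 0xF0`, the next four cores, so target and initiator never share a CPU. Expanding such a mask is simple bit arithmetic:

```python
def mask_to_cores(mask: int) -> list[int]:
    """Expand an SPDK/DPDK hex core mask into the list of CPU core indices."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

# nvmf_tgt ran with -m 0xF: four reactors on cores 0-3, matching the
# reactor_run notices above. spdk_nvme_perf uses -c 0xF0: lcores 4-7.
print(mask_to_cores(0xF))   # [0, 1, 2, 3]
print(mask_to_cores(0xF0))  # [4, 5, 6, 7]
```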
00:19:58.473 [2024-11-20 11:14:25.801853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.473 [2024-11-20 11:14:25.801977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.473 [2024-11-20 11:14:25.802034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.473 [2024-11-20 11:14:25.802035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:58.473 11:14:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.473 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.732 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.732 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:58.732 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.732 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.732 [2024-11-20 11:14:25.999888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.732 Malloc1 00:19:58.732 11:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.732 [2024-11-20 11:14:26.063421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=4103566 00:19:58.732 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:58.732 11:14:26 
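The `adq_configure_nvmf_target 0` steps traced above, written out as plain RPC calls against the target that was started with `--wait-for-rpc`. This is a sketch: `rpc_cmd` in the trace is a wrapper around SPDK's `scripts/rpc.py`, and the script path here is an assumption; the arguments themselves are copied from the log:

```
RPC=scripts/rpc.py

IMPL=$($RPC sock_get_default_impl | jq -r .impl_name)   # posix in this run
$RPC sock_impl_set_options --enable-placement-id 0 \
    --enable-zerocopy-send-server -i "$IMPL"
$RPC framework_start_init                               # leave --wait-for-rpc state
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1               # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The ordering matters: socket options must be set before `framework_start_init`, which is why the target is launched with `--wait-for-rpc` in the first place.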
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:00.633 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:00.633 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.633 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.633 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.633 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:00.633 "tick_rate": 2300000000, 00:20:00.633 "poll_groups": [ 00:20:00.633 { 00:20:00.633 "name": "nvmf_tgt_poll_group_000", 00:20:00.633 "admin_qpairs": 1, 00:20:00.633 "io_qpairs": 1, 00:20:00.633 "current_admin_qpairs": 1, 00:20:00.633 "current_io_qpairs": 1, 00:20:00.633 "pending_bdev_io": 0, 00:20:00.633 "completed_nvme_io": 20156, 00:20:00.633 "transports": [ 00:20:00.633 { 00:20:00.633 "trtype": "TCP" 00:20:00.633 } 00:20:00.633 ] 00:20:00.633 }, 00:20:00.633 { 00:20:00.633 "name": "nvmf_tgt_poll_group_001", 00:20:00.633 "admin_qpairs": 0, 00:20:00.633 "io_qpairs": 1, 00:20:00.633 "current_admin_qpairs": 0, 00:20:00.633 "current_io_qpairs": 1, 00:20:00.633 "pending_bdev_io": 0, 00:20:00.633 "completed_nvme_io": 20342, 00:20:00.633 "transports": [ 00:20:00.633 { 00:20:00.633 "trtype": "TCP" 00:20:00.633 } 00:20:00.633 ] 00:20:00.633 }, 00:20:00.633 { 00:20:00.633 "name": "nvmf_tgt_poll_group_002", 00:20:00.633 "admin_qpairs": 0, 00:20:00.633 "io_qpairs": 1, 00:20:00.633 "current_admin_qpairs": 0, 00:20:00.633 "current_io_qpairs": 1, 00:20:00.633 "pending_bdev_io": 0, 00:20:00.633 "completed_nvme_io": 20281, 00:20:00.633 
"transports": [ 00:20:00.633 { 00:20:00.633 "trtype": "TCP" 00:20:00.633 } 00:20:00.633 ] 00:20:00.633 }, 00:20:00.633 { 00:20:00.633 "name": "nvmf_tgt_poll_group_003", 00:20:00.633 "admin_qpairs": 0, 00:20:00.633 "io_qpairs": 1, 00:20:00.633 "current_admin_qpairs": 0, 00:20:00.633 "current_io_qpairs": 1, 00:20:00.633 "pending_bdev_io": 0, 00:20:00.633 "completed_nvme_io": 20319, 00:20:00.633 "transports": [ 00:20:00.633 { 00:20:00.633 "trtype": "TCP" 00:20:00.633 } 00:20:00.633 ] 00:20:00.633 } 00:20:00.633 ] 00:20:00.633 }' 00:20:00.633 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:00.633 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:00.892 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:00.892 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:00.892 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 4103566 00:20:09.004 Initializing NVMe Controllers 00:20:09.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:09.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:09.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:09.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:09.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:09.004 Initialization complete. Launching workers. 
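The `jq -r '.poll_groups[] | select(.current_io_qpairs == 1)' | wc -l` check above counts poll groups that currently carry an I/O queue pair; with ADQ working, each of the four reactors should own exactly one connection, so `count=4`. The same check in Python, over a trimmed copy of the stats the log prints:

```python
import json

# Trimmed nvmf_get_stats payload from the trace above (one poll group per
# reactor in the 0xF mask, each holding one I/O qpair).
stats_json = """{
  "tick_rate": 2300000000,
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1}
  ]
}"""

stats = json.loads(stats_json)
count = sum(1 for pg in stats["poll_groups"] if pg["current_io_qpairs"] == 1)
print(count)  # 4 -> every poll group took a connection, i.e. the spread worked
```

This is what the `[[ 4 -ne 4 ]]` guard in the trace is asserting: had the connections piled up on one poll group, the count would drop below the core count and the test would bail.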
00:20:09.004 ======================================================== 00:20:09.004 Latency(us) 00:20:09.004 Device Information : IOPS MiB/s Average min max 00:20:09.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10536.05 41.16 6074.62 2350.83 10242.72 00:20:09.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10562.45 41.26 6059.31 1297.27 9685.35 00:20:09.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10543.55 41.19 6070.24 1975.78 12605.51 00:20:09.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10555.75 41.23 6062.00 2445.94 10843.65 00:20:09.004 ======================================================== 00:20:09.004 Total : 42197.80 164.84 6066.54 1297.27 12605.51 00:20:09.004 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:09.004 rmmod nvme_tcp 00:20:09.004 rmmod nvme_fabrics 00:20:09.004 rmmod nvme_keyring 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:09.004 11:14:36 
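The Total row in the latency table above is derived from the four per-lcore rows: IOPS and MiB/s are plain sums, while the average latency is IOPS-weighted (min and max are taken across cores). Reconstructing it from the numbers in the table:

```python
# (iops, avg_latency_us) per lcore, copied from the perf table above.
rows = [
    (10536.05, 6074.62),
    (10562.45, 6059.31),
    (10543.55, 6070.24),
    (10555.75, 6062.00),
]

total_iops = sum(iops for iops, _ in rows)
weighted_avg_us = sum(iops * lat for iops, lat in rows) / total_iops

print(f"{total_iops:.2f}")       # 42197.80, matching the Total row
print(f"{weighted_avg_us:.2f}")  # ~6066.54 us, matching the Total row
```

The ~6 ms average at queue depth 64 across four cores is consistent with Little's law: 4 cores x 64 outstanding I/Os / 42198 IOPS is roughly 6.1 ms per I/O.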
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 4103540 ']' 00:20:09.004 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 4103540 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 4103540 ']' 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 4103540 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4103540 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4103540' 00:20:09.005 killing process with pid 4103540 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 4103540 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 4103540 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:09.005 
11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.005 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.540 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:11.540 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:11.540 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:11.540 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:12.479 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:14.386 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
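The `iptr` cleanup above pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`: because every rule the test inserted was tagged with an `SPDK_NVMF:` comment (see the `ipts` wrapper earlier in the log), teardown can strip exactly those rules and leave the rest of the firewall untouched. A minimal model of that filter:

```python
# A toy saved ruleset: one pre-existing rule, one rule tagged by the test,
# one unrelated rule. Only the tagged rule should be dropped on restore.
ruleset = [
    "-A INPUT -i lo -j ACCEPT",
    '-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT '
    '-m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"',
    "-A INPUT -p tcp --dport 22 -j ACCEPT",
]

kept = [rule for rule in ruleset if "SPDK_NVMF" not in rule]
print(len(kept))  # 2 -> only the tagged NVMe/TCP rule was removed
```

Tag-and-grep is a common pattern for test harnesses that must mutate shared host state: the comment acts as a namespace, so cleanup is idempotent even if the test crashed mid-run.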
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:19.660 11:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:19.660 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:19.660 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.660 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:19.660 Found net devices under 0000:86:00.0: cvl_0_0 00:20:19.661 11:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:19.661 Found net devices under 0000:86:00.1: cvl_0_1 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:19.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:20:19.661 00:20:19.661 --- 10.0.0.2 ping statistics --- 00:20:19.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.661 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:20:19.661 00:20:19.661 --- 10.0.0.1 ping statistics --- 00:20:19.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.661 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:19.661 net.core.busy_poll = 1 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:19.661 net.core.busy_read = 1 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:19.661 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:19.661 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:19.661 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=4107353 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 4107353 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 4107353 ']' 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.920 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.920 [2024-11-20 11:14:47.264729] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:20:19.921 [2024-11-20 11:14:47.264781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.921 [2024-11-20 11:14:47.347681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.921 [2024-11-20 11:14:47.391746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.921 [2024-11-20 11:14:47.391781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.921 [2024-11-20 11:14:47.391788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.921 [2024-11-20 11:14:47.391794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:19.921 [2024-11-20 11:14:47.391799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.921 [2024-11-20 11:14:47.393375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.921 [2024-11-20 11:14:47.393402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.921 [2024-11-20 11:14:47.393506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.921 [2024-11-20 11:14:47.393507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.856 [2024-11-20 11:14:48.276879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.856 11:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.856 Malloc1 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.856 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.856 [2024-11-20 11:14:48.345880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.115 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.115 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=4107602 
00:20:21.115 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:21.115 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:23.170 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:23.170 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.170 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.170 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.170 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:23.170 "tick_rate": 2300000000, 00:20:23.170 "poll_groups": [ 00:20:23.170 { 00:20:23.170 "name": "nvmf_tgt_poll_group_000", 00:20:23.170 "admin_qpairs": 1, 00:20:23.170 "io_qpairs": 1, 00:20:23.170 "current_admin_qpairs": 1, 00:20:23.170 "current_io_qpairs": 1, 00:20:23.170 "pending_bdev_io": 0, 00:20:23.170 "completed_nvme_io": 26635, 00:20:23.170 "transports": [ 00:20:23.170 { 00:20:23.170 "trtype": "TCP" 00:20:23.170 } 00:20:23.170 ] 00:20:23.170 }, 00:20:23.170 { 00:20:23.170 "name": "nvmf_tgt_poll_group_001", 00:20:23.170 "admin_qpairs": 0, 00:20:23.170 "io_qpairs": 3, 00:20:23.170 "current_admin_qpairs": 0, 00:20:23.170 "current_io_qpairs": 3, 00:20:23.170 "pending_bdev_io": 0, 00:20:23.170 "completed_nvme_io": 29683, 00:20:23.170 "transports": [ 00:20:23.170 { 00:20:23.170 "trtype": "TCP" 00:20:23.170 } 00:20:23.170 ] 00:20:23.170 }, 00:20:23.170 { 00:20:23.170 "name": "nvmf_tgt_poll_group_002", 00:20:23.170 "admin_qpairs": 0, 00:20:23.170 "io_qpairs": 0, 00:20:23.170 "current_admin_qpairs": 0, 
00:20:23.170 "current_io_qpairs": 0, 00:20:23.170 "pending_bdev_io": 0, 00:20:23.170 "completed_nvme_io": 0, 00:20:23.170 "transports": [ 00:20:23.170 { 00:20:23.170 "trtype": "TCP" 00:20:23.170 } 00:20:23.170 ] 00:20:23.170 }, 00:20:23.170 { 00:20:23.170 "name": "nvmf_tgt_poll_group_003", 00:20:23.170 "admin_qpairs": 0, 00:20:23.170 "io_qpairs": 0, 00:20:23.170 "current_admin_qpairs": 0, 00:20:23.170 "current_io_qpairs": 0, 00:20:23.170 "pending_bdev_io": 0, 00:20:23.170 "completed_nvme_io": 0, 00:20:23.170 "transports": [ 00:20:23.170 { 00:20:23.170 "trtype": "TCP" 00:20:23.170 } 00:20:23.170 ] 00:20:23.170 } 00:20:23.170 ] 00:20:23.170 }' 00:20:23.171 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:23.171 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:23.171 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:23.171 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:23.171 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 4107602 00:20:31.287 Initializing NVMe Controllers 00:20:31.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:31.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:31.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:31.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:31.287 Initialization complete. Launching workers. 
00:20:31.287 ======================================================== 00:20:31.287 Latency(us) 00:20:31.287 Device Information : IOPS MiB/s Average min max 00:20:31.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5327.20 20.81 12012.06 1465.78 58747.30 00:20:31.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13651.80 53.33 4686.99 1522.13 46558.58 00:20:31.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5210.50 20.35 12284.37 1579.08 59176.25 00:20:31.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5607.80 21.91 11426.00 1534.49 57275.29 00:20:31.287 ======================================================== 00:20:31.287 Total : 29797.29 116.40 8593.36 1465.78 59176.25 00:20:31.287 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:31.287 rmmod nvme_tcp 00:20:31.287 rmmod nvme_fabrics 00:20:31.287 rmmod nvme_keyring 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:31.287 11:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 4107353 ']' 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 4107353 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 4107353 ']' 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 4107353 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4107353 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4107353' 00:20:31.287 killing process with pid 4107353 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 4107353 00:20:31.287 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 4107353 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:31.546 
11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.546 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.453 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:33.453 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:33.453 00:20:33.453 real 0m49.531s 00:20:33.453 user 2m46.383s 00:20:33.453 sys 0m10.437s 00:20:33.453 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.453 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.453 ************************************ 00:20:33.453 END TEST nvmf_perf_adq 00:20:33.453 ************************************ 00:20:33.713 11:15:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:33.713 11:15:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:33.713 11:15:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.713 11:15:00 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:33.713 ************************************ 00:20:33.713 START TEST nvmf_shutdown 00:20:33.713 ************************************ 00:20:33.713 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:33.713 * Looking for test storage... 00:20:33.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:33.713 11:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:33.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.713 --rc genhtml_branch_coverage=1 00:20:33.713 --rc genhtml_function_coverage=1 00:20:33.713 --rc genhtml_legend=1 00:20:33.713 --rc geninfo_all_blocks=1 00:20:33.713 --rc geninfo_unexecuted_blocks=1 00:20:33.713 00:20:33.713 ' 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:33.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.713 --rc genhtml_branch_coverage=1 00:20:33.713 --rc genhtml_function_coverage=1 00:20:33.713 --rc genhtml_legend=1 00:20:33.713 --rc geninfo_all_blocks=1 00:20:33.713 --rc geninfo_unexecuted_blocks=1 00:20:33.713 00:20:33.713 ' 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:33.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.713 --rc genhtml_branch_coverage=1 00:20:33.713 --rc genhtml_function_coverage=1 00:20:33.713 --rc genhtml_legend=1 00:20:33.713 --rc geninfo_all_blocks=1 00:20:33.713 --rc geninfo_unexecuted_blocks=1 00:20:33.713 00:20:33.713 ' 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:33.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.713 --rc genhtml_branch_coverage=1 00:20:33.713 --rc genhtml_function_coverage=1 00:20:33.713 --rc genhtml_legend=1 00:20:33.713 --rc geninfo_all_blocks=1 00:20:33.713 --rc geninfo_unexecuted_blocks=1 00:20:33.713 00:20:33.713 ' 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.713 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:33.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.714 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:33.973 ************************************ 00:20:33.973 START TEST nvmf_shutdown_tc1 00:20:33.973 ************************************ 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:33.973 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:40.542 11:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.542 11:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:40.542 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.542 11:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:40.542 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:40.542 Found net devices under 0000:86:00.0: cvl_0_0 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:40.542 Found net devices under 0000:86:00.1: cvl_0_1 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.542 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.543 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:40.543 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.543 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.543 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:40.543 11:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:40.543 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.543 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.543 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:40.543 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:40.543 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.543 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:40.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:20:40.543 00:20:40.543 --- 10.0.0.2 ping statistics --- 00:20:40.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.543 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:20:40.543 00:20:40.543 --- 10.0.0.1 ping statistics --- 00:20:40.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.543 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=4112953 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 4112953 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 4112953 ']' 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:40.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.543 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:40.543 [2024-11-20 11:15:07.273218] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:20:40.543 [2024-11-20 11:15:07.273264] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.543 [2024-11-20 11:15:07.354046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.543 [2024-11-20 11:15:07.396528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.543 [2024-11-20 11:15:07.396567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.543 [2024-11-20 11:15:07.396574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.543 [2024-11-20 11:15:07.396580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.543 [2024-11-20 11:15:07.396586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:40.543 [2024-11-20 11:15:07.398181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.543 [2024-11-20 11:15:07.398285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.543 [2024-11-20 11:15:07.398408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.543 [2024-11-20 11:15:07.398410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:40.803 [2024-11-20 11:15:08.161870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.803 11:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.803 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:40.803 Malloc1 00:20:40.803 [2024-11-20 11:15:08.272552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.803 Malloc2 00:20:41.062 Malloc3 00:20:41.062 Malloc4 00:20:41.062 Malloc5 00:20:41.062 Malloc6 00:20:41.062 Malloc7 00:20:41.062 Malloc8 00:20:41.321 Malloc9 
00:20:41.321 Malloc10 00:20:41.321 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.321 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=4113406 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 4113406 /var/tmp/bdevperf.sock 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 4113406 ']' 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.322 { 00:20:41.322 "params": { 00:20:41.322 "name": "Nvme$subsystem", 00:20:41.322 "trtype": "$TEST_TRANSPORT", 00:20:41.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.322 "adrfam": "ipv4", 00:20:41.322 "trsvcid": "$NVMF_PORT", 00:20:41.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.322 "hdgst": ${hdgst:-false}, 00:20:41.322 "ddgst": ${ddgst:-false} 00:20:41.322 }, 00:20:41.322 "method": "bdev_nvme_attach_controller" 00:20:41.322 } 00:20:41.322 EOF 00:20:41.322 )") 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.322 { 00:20:41.322 "params": { 00:20:41.322 "name": "Nvme$subsystem", 00:20:41.322 "trtype": "$TEST_TRANSPORT", 00:20:41.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.322 "adrfam": "ipv4", 00:20:41.322 "trsvcid": "$NVMF_PORT", 00:20:41.322 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.322 "hdgst": ${hdgst:-false}, 00:20:41.322 "ddgst": ${ddgst:-false} 00:20:41.322 }, 00:20:41.322 "method": "bdev_nvme_attach_controller" 00:20:41.322 } 00:20:41.322 EOF 00:20:41.322 )") 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.322 { 00:20:41.322 "params": { 00:20:41.322 "name": "Nvme$subsystem", 00:20:41.322 "trtype": "$TEST_TRANSPORT", 00:20:41.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.322 "adrfam": "ipv4", 00:20:41.322 "trsvcid": "$NVMF_PORT", 00:20:41.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.322 "hdgst": ${hdgst:-false}, 00:20:41.322 "ddgst": ${ddgst:-false} 00:20:41.322 }, 00:20:41.322 "method": "bdev_nvme_attach_controller" 00:20:41.322 } 00:20:41.322 EOF 00:20:41.322 )") 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.322 { 00:20:41.322 "params": { 00:20:41.322 "name": "Nvme$subsystem", 00:20:41.322 "trtype": "$TEST_TRANSPORT", 00:20:41.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.322 "adrfam": "ipv4", 00:20:41.322 "trsvcid": "$NVMF_PORT", 00:20:41.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.322 "hdgst": 
${hdgst:-false}, 00:20:41.322 "ddgst": ${ddgst:-false} 00:20:41.322 }, 00:20:41.322 "method": "bdev_nvme_attach_controller" 00:20:41.322 } 00:20:41.322 EOF 00:20:41.322 )") 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.322 { 00:20:41.322 "params": { 00:20:41.322 "name": "Nvme$subsystem", 00:20:41.322 "trtype": "$TEST_TRANSPORT", 00:20:41.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.322 "adrfam": "ipv4", 00:20:41.322 "trsvcid": "$NVMF_PORT", 00:20:41.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.322 "hdgst": ${hdgst:-false}, 00:20:41.322 "ddgst": ${ddgst:-false} 00:20:41.322 }, 00:20:41.322 "method": "bdev_nvme_attach_controller" 00:20:41.322 } 00:20:41.322 EOF 00:20:41.322 )") 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.322 { 00:20:41.322 "params": { 00:20:41.322 "name": "Nvme$subsystem", 00:20:41.322 "trtype": "$TEST_TRANSPORT", 00:20:41.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.322 "adrfam": "ipv4", 00:20:41.322 "trsvcid": "$NVMF_PORT", 00:20:41.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.322 "hdgst": ${hdgst:-false}, 00:20:41.322 "ddgst": ${ddgst:-false} 00:20:41.322 }, 00:20:41.322 "method": "bdev_nvme_attach_controller" 
00:20:41.322 } 00:20:41.322 EOF 00:20:41.322 )") 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.322 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.323 { 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme$subsystem", 00:20:41.323 "trtype": "$TEST_TRANSPORT", 00:20:41.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "$NVMF_PORT", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.323 "hdgst": ${hdgst:-false}, 00:20:41.323 "ddgst": ${ddgst:-false} 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 } 00:20:41.323 EOF 00:20:41.323 )") 00:20:41.323 [2024-11-20 11:15:08.742195] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:20:41.323 [2024-11-20 11:15:08.742243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.323 { 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme$subsystem", 00:20:41.323 "trtype": "$TEST_TRANSPORT", 00:20:41.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "$NVMF_PORT", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.323 "hdgst": ${hdgst:-false}, 00:20:41.323 "ddgst": ${ddgst:-false} 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 } 00:20:41.323 EOF 00:20:41.323 )") 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.323 { 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme$subsystem", 00:20:41.323 "trtype": "$TEST_TRANSPORT", 00:20:41.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "$NVMF_PORT", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.323 "hdgst": ${hdgst:-false}, 
00:20:41.323 "ddgst": ${ddgst:-false} 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 } 00:20:41.323 EOF 00:20:41.323 )") 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:41.323 { 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme$subsystem", 00:20:41.323 "trtype": "$TEST_TRANSPORT", 00:20:41.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "$NVMF_PORT", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.323 "hdgst": ${hdgst:-false}, 00:20:41.323 "ddgst": ${ddgst:-false} 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 } 00:20:41.323 EOF 00:20:41.323 )") 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:41.323 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme1", 00:20:41.323 "trtype": "tcp", 00:20:41.323 "traddr": "10.0.0.2", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "4420", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.323 "hdgst": false, 00:20:41.323 "ddgst": false 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 },{ 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme2", 00:20:41.323 "trtype": "tcp", 00:20:41.323 "traddr": "10.0.0.2", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "4420", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:41.323 "hdgst": false, 00:20:41.323 "ddgst": false 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 },{ 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme3", 00:20:41.323 "trtype": "tcp", 00:20:41.323 "traddr": "10.0.0.2", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "4420", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:41.323 "hdgst": false, 00:20:41.323 "ddgst": false 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 },{ 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme4", 00:20:41.323 "trtype": "tcp", 00:20:41.323 "traddr": "10.0.0.2", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "4420", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:41.323 "hdgst": false, 00:20:41.323 "ddgst": false 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 },{ 00:20:41.323 "params": { 
00:20:41.323 "name": "Nvme5", 00:20:41.323 "trtype": "tcp", 00:20:41.323 "traddr": "10.0.0.2", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "4420", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:41.323 "hdgst": false, 00:20:41.323 "ddgst": false 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 },{ 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme6", 00:20:41.323 "trtype": "tcp", 00:20:41.323 "traddr": "10.0.0.2", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "4420", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:41.323 "hdgst": false, 00:20:41.323 "ddgst": false 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 },{ 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme7", 00:20:41.323 "trtype": "tcp", 00:20:41.323 "traddr": "10.0.0.2", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "4420", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:41.323 "hdgst": false, 00:20:41.323 "ddgst": false 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.323 },{ 00:20:41.323 "params": { 00:20:41.323 "name": "Nvme8", 00:20:41.323 "trtype": "tcp", 00:20:41.323 "traddr": "10.0.0.2", 00:20:41.323 "adrfam": "ipv4", 00:20:41.323 "trsvcid": "4420", 00:20:41.323 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:41.323 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:41.323 "hdgst": false, 00:20:41.323 "ddgst": false 00:20:41.323 }, 00:20:41.323 "method": "bdev_nvme_attach_controller" 00:20:41.324 },{ 00:20:41.324 "params": { 00:20:41.324 "name": "Nvme9", 00:20:41.324 "trtype": "tcp", 00:20:41.324 "traddr": "10.0.0.2", 00:20:41.324 "adrfam": "ipv4", 00:20:41.324 "trsvcid": "4420", 00:20:41.324 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:41.324 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:41.324 "hdgst": false, 00:20:41.324 "ddgst": false 00:20:41.324 }, 00:20:41.324 "method": "bdev_nvme_attach_controller" 00:20:41.324 },{ 00:20:41.324 "params": { 00:20:41.324 "name": "Nvme10", 00:20:41.324 "trtype": "tcp", 00:20:41.324 "traddr": "10.0.0.2", 00:20:41.324 "adrfam": "ipv4", 00:20:41.324 "trsvcid": "4420", 00:20:41.324 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:41.324 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:41.324 "hdgst": false, 00:20:41.324 "ddgst": false 00:20:41.324 }, 00:20:41.324 "method": "bdev_nvme_attach_controller" 00:20:41.324 }' 00:20:41.581 [2024-11-20 11:15:08.820672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.581 [2024-11-20 11:15:08.862054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.478 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.478 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:43.478 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:43.478 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.478 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.478 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.478 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 4113406 00:20:43.478 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:43.478 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:44.412 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 4113406 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:44.412 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 4112953 00:20:44.412 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:44.412 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:44.412 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:44.412 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:44.412 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.412 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.412 { 00:20:44.412 "params": { 00:20:44.412 "name": "Nvme$subsystem", 00:20:44.412 "trtype": "$TEST_TRANSPORT", 00:20:44.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.412 "adrfam": "ipv4", 00:20:44.412 "trsvcid": "$NVMF_PORT", 00:20:44.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.412 "hdgst": ${hdgst:-false}, 00:20:44.413 "ddgst": ${ddgst:-false} 00:20:44.413 }, 00:20:44.413 "method": "bdev_nvme_attach_controller" 00:20:44.413 } 00:20:44.413 EOF 00:20:44.413 )") 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.413 11:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.413 { 00:20:44.413 "params": { 00:20:44.413 "name": "Nvme$subsystem", 00:20:44.413 "trtype": "$TEST_TRANSPORT", 00:20:44.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.413 "adrfam": "ipv4", 00:20:44.413 "trsvcid": "$NVMF_PORT", 00:20:44.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.413 "hdgst": ${hdgst:-false}, 00:20:44.413 "ddgst": ${ddgst:-false} 00:20:44.413 }, 00:20:44.413 "method": "bdev_nvme_attach_controller" 00:20:44.413 } 00:20:44.413 EOF 00:20:44.413 )") 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.413 { 00:20:44.413 "params": { 00:20:44.413 "name": "Nvme$subsystem", 00:20:44.413 "trtype": "$TEST_TRANSPORT", 00:20:44.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.413 "adrfam": "ipv4", 00:20:44.413 "trsvcid": "$NVMF_PORT", 00:20:44.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.413 "hdgst": ${hdgst:-false}, 00:20:44.413 "ddgst": ${ddgst:-false} 00:20:44.413 }, 00:20:44.413 "method": "bdev_nvme_attach_controller" 00:20:44.413 } 00:20:44.413 EOF 00:20:44.413 )") 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.413 
11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.413 { 00:20:44.413 "params": { 00:20:44.413 "name": "Nvme$subsystem", 00:20:44.413 "trtype": "$TEST_TRANSPORT", 00:20:44.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.413 "adrfam": "ipv4", 00:20:44.413 "trsvcid": "$NVMF_PORT", 00:20:44.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.413 "hdgst": ${hdgst:-false}, 00:20:44.413 "ddgst": ${ddgst:-false} 00:20:44.413 }, 00:20:44.413 "method": "bdev_nvme_attach_controller" 00:20:44.413 } 00:20:44.413 EOF 00:20:44.413 )") 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.413 { 00:20:44.413 "params": { 00:20:44.413 "name": "Nvme$subsystem", 00:20:44.413 "trtype": "$TEST_TRANSPORT", 00:20:44.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.413 "adrfam": "ipv4", 00:20:44.413 "trsvcid": "$NVMF_PORT", 00:20:44.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.413 "hdgst": ${hdgst:-false}, 00:20:44.413 "ddgst": ${ddgst:-false} 00:20:44.413 }, 00:20:44.413 "method": "bdev_nvme_attach_controller" 00:20:44.413 } 00:20:44.413 EOF 00:20:44.413 )") 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:20:44.413 { 00:20:44.413 "params": { 00:20:44.413 "name": "Nvme$subsystem", 00:20:44.413 "trtype": "$TEST_TRANSPORT", 00:20:44.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.413 "adrfam": "ipv4", 00:20:44.413 "trsvcid": "$NVMF_PORT", 00:20:44.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.413 "hdgst": ${hdgst:-false}, 00:20:44.413 "ddgst": ${ddgst:-false} 00:20:44.413 }, 00:20:44.413 "method": "bdev_nvme_attach_controller" 00:20:44.413 } 00:20:44.413 EOF 00:20:44.413 )") 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.413 { 00:20:44.413 "params": { 00:20:44.413 "name": "Nvme$subsystem", 00:20:44.413 "trtype": "$TEST_TRANSPORT", 00:20:44.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.413 "adrfam": "ipv4", 00:20:44.413 "trsvcid": "$NVMF_PORT", 00:20:44.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.413 "hdgst": ${hdgst:-false}, 00:20:44.413 "ddgst": ${ddgst:-false} 00:20:44.413 }, 00:20:44.413 "method": "bdev_nvme_attach_controller" 00:20:44.413 } 00:20:44.413 EOF 00:20:44.413 )") 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.413 [2024-11-20 11:15:11.689323] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:20:44.413 [2024-11-20 11:15:11.689378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4114118 ] 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.413 { 00:20:44.413 "params": { 00:20:44.413 "name": "Nvme$subsystem", 00:20:44.413 "trtype": "$TEST_TRANSPORT", 00:20:44.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.413 "adrfam": "ipv4", 00:20:44.413 "trsvcid": "$NVMF_PORT", 00:20:44.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.413 "hdgst": ${hdgst:-false}, 00:20:44.413 "ddgst": ${ddgst:-false} 00:20:44.413 }, 00:20:44.413 "method": "bdev_nvme_attach_controller" 00:20:44.413 } 00:20:44.413 EOF 00:20:44.413 )") 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.413 { 00:20:44.413 "params": { 00:20:44.413 "name": "Nvme$subsystem", 00:20:44.413 "trtype": "$TEST_TRANSPORT", 00:20:44.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.413 "adrfam": "ipv4", 00:20:44.413 "trsvcid": "$NVMF_PORT", 00:20:44.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.413 "hdgst": ${hdgst:-false}, 00:20:44.413 "ddgst": ${ddgst:-false} 00:20:44.413 }, 00:20:44.413 "method": 
"bdev_nvme_attach_controller" 00:20:44.413 } 00:20:44.413 EOF 00:20:44.413 )") 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.413 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.413 { 00:20:44.413 "params": { 00:20:44.413 "name": "Nvme$subsystem", 00:20:44.413 "trtype": "$TEST_TRANSPORT", 00:20:44.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.413 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "$NVMF_PORT", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.414 "hdgst": ${hdgst:-false}, 00:20:44.414 "ddgst": ${ddgst:-false} 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 } 00:20:44.414 EOF 00:20:44.414 )") 00:20:44.414 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:44.414 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:44.414 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:44.414 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:44.414 "params": { 00:20:44.414 "name": "Nvme1", 00:20:44.414 "trtype": "tcp", 00:20:44.414 "traddr": "10.0.0.2", 00:20:44.414 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "4420", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.414 "hdgst": false, 00:20:44.414 "ddgst": false 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 },{ 00:20:44.414 "params": { 00:20:44.414 "name": "Nvme2", 00:20:44.414 "trtype": "tcp", 00:20:44.414 "traddr": "10.0.0.2", 00:20:44.414 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "4420", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:44.414 "hdgst": false, 00:20:44.414 "ddgst": false 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 },{ 00:20:44.414 "params": { 00:20:44.414 "name": "Nvme3", 00:20:44.414 "trtype": "tcp", 00:20:44.414 "traddr": "10.0.0.2", 00:20:44.414 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "4420", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:44.414 "hdgst": false, 00:20:44.414 "ddgst": false 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 },{ 00:20:44.414 "params": { 00:20:44.414 "name": "Nvme4", 00:20:44.414 "trtype": "tcp", 00:20:44.414 "traddr": "10.0.0.2", 00:20:44.414 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "4420", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:44.414 "hdgst": false, 00:20:44.414 "ddgst": false 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 },{ 00:20:44.414 "params": { 
00:20:44.414 "name": "Nvme5", 00:20:44.414 "trtype": "tcp", 00:20:44.414 "traddr": "10.0.0.2", 00:20:44.414 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "4420", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:44.414 "hdgst": false, 00:20:44.414 "ddgst": false 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 },{ 00:20:44.414 "params": { 00:20:44.414 "name": "Nvme6", 00:20:44.414 "trtype": "tcp", 00:20:44.414 "traddr": "10.0.0.2", 00:20:44.414 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "4420", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:44.414 "hdgst": false, 00:20:44.414 "ddgst": false 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 },{ 00:20:44.414 "params": { 00:20:44.414 "name": "Nvme7", 00:20:44.414 "trtype": "tcp", 00:20:44.414 "traddr": "10.0.0.2", 00:20:44.414 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "4420", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:44.414 "hdgst": false, 00:20:44.414 "ddgst": false 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 },{ 00:20:44.414 "params": { 00:20:44.414 "name": "Nvme8", 00:20:44.414 "trtype": "tcp", 00:20:44.414 "traddr": "10.0.0.2", 00:20:44.414 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "4420", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:44.414 "hdgst": false, 00:20:44.414 "ddgst": false 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 },{ 00:20:44.414 "params": { 00:20:44.414 "name": "Nvme9", 00:20:44.414 "trtype": "tcp", 00:20:44.414 "traddr": "10.0.0.2", 00:20:44.414 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "4420", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:44.414 "hdgst": false, 00:20:44.414 "ddgst": false 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 },{ 00:20:44.414 "params": { 00:20:44.414 "name": "Nvme10", 00:20:44.414 "trtype": "tcp", 00:20:44.414 "traddr": "10.0.0.2", 00:20:44.414 "adrfam": "ipv4", 00:20:44.414 "trsvcid": "4420", 00:20:44.414 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:44.414 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:44.414 "hdgst": false, 00:20:44.414 "ddgst": false 00:20:44.414 }, 00:20:44.414 "method": "bdev_nvme_attach_controller" 00:20:44.414 }' 00:20:44.414 [2024-11-20 11:15:11.783785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.414 [2024-11-20 11:15:11.825319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.790 Running I/O for 1 seconds... 00:20:47.164 2200.00 IOPS, 137.50 MiB/s 00:20:47.164 Latency(us) 00:20:47.164 [2024-11-20T10:15:14.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.164 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.164 Verification LBA range: start 0x0 length 0x400 00:20:47.164 Nvme1n1 : 1.14 280.72 17.54 0.00 0.00 225785.77 14588.88 223392.28 00:20:47.164 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.164 Verification LBA range: start 0x0 length 0x400 00:20:47.164 Nvme2n1 : 1.05 243.68 15.23 0.00 0.00 256225.73 19489.84 233422.14 00:20:47.164 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.164 Verification LBA range: start 0x0 length 0x400 00:20:47.164 Nvme3n1 : 1.09 323.48 20.22 0.00 0.00 186418.29 14019.01 217921.45 00:20:47.164 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.164 Verification LBA range: start 0x0 length 0x400 00:20:47.164 Nvme4n1 : 1.15 281.79 17.61 0.00 0.00 215168.26 3704.21 215186.03 00:20:47.164 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:47.164 Verification LBA range: start 0x0 length 0x400 00:20:47.164 Nvme5n1 : 1.15 277.23 17.33 0.00 0.00 216074.86 18919.96 217921.45 00:20:47.164 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.164 Verification LBA range: start 0x0 length 0x400 00:20:47.164 Nvme6n1 : 1.09 235.77 14.74 0.00 0.00 248982.26 17096.35 231598.53 00:20:47.164 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.164 Verification LBA range: start 0x0 length 0x400 00:20:47.164 Nvme7n1 : 1.15 278.55 17.41 0.00 0.00 208639.55 29861.62 199685.34 00:20:47.164 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.164 Verification LBA range: start 0x0 length 0x400 00:20:47.165 Nvme8n1 : 1.16 276.34 17.27 0.00 0.00 207244.42 14702.86 224304.08 00:20:47.165 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.165 Verification LBA range: start 0x0 length 0x400 00:20:47.165 Nvme9n1 : 1.16 275.14 17.20 0.00 0.00 205100.79 15386.71 227951.30 00:20:47.165 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.165 Verification LBA range: start 0x0 length 0x400 00:20:47.165 Nvme10n1 : 1.17 274.56 17.16 0.00 0.00 202329.49 12993.22 240716.58 00:20:47.165 [2024-11-20T10:15:14.661Z] =================================================================================================================== 00:20:47.165 [2024-11-20T10:15:14.661Z] Total : 2747.25 171.70 0.00 0.00 215400.65 3704.21 240716.58 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.165 rmmod nvme_tcp 00:20:47.165 rmmod nvme_fabrics 00:20:47.165 rmmod nvme_keyring 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 4112953 ']' 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 4112953 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 4112953 ']' 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 4112953 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.165 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4112953 00:20:47.423 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:47.423 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:47.423 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4112953' 00:20:47.423 killing process with pid 4112953 00:20:47.423 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 4112953 00:20:47.423 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 4112953 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:47.682 11:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.682 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:50.216 00:20:50.216 real 0m15.887s 00:20:50.216 user 0m36.607s 00:20:50.216 sys 0m5.870s 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:50.216 ************************************ 00:20:50.216 END TEST nvmf_shutdown_tc1 00:20:50.216 ************************************ 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:50.216 ************************************ 00:20:50.216 
START TEST nvmf_shutdown_tc2 00:20:50.216 ************************************ 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:50.216 11:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:50.216 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.217 11:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.217 11:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:50.217 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:50.217 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:50.217 11:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.217 11:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:50.217 Found net devices under 0000:86:00.0: cvl_0_0 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:50.217 Found net devices under 0000:86:00.1: cvl_0_1 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:50.217 11:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.217 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:50.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:50.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:20:50.218 00:20:50.218 --- 10.0.0.2 ping statistics --- 00:20:50.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.218 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:20:50.218 00:20:50.218 --- 10.0.0.1 ping statistics --- 00:20:50.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.218 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.218 11:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4115154 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4115154 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4115154 ']' 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.218 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 [2024-11-20 11:15:17.544372] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:20:50.218 [2024-11-20 11:15:17.544416] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.218 [2024-11-20 11:15:17.625442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.218 [2024-11-20 11:15:17.669339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.218 [2024-11-20 11:15:17.669374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.218 [2024-11-20 11:15:17.669381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.218 [2024-11-20 11:15:17.669387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.218 [2024-11-20 11:15:17.669392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:50.218 [2024-11-20 11:15:17.670912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.218 [2024-11-20 11:15:17.671019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.218 [2024-11-20 11:15:17.671055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.218 [2024-11-20 11:15:17.671055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.154 [2024-11-20 11:15:18.422323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.154 11:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.154 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.154 Malloc1 00:20:51.154 [2024-11-20 11:15:18.527244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.154 Malloc2 00:20:51.154 Malloc3 00:20:51.154 Malloc4 00:20:51.413 Malloc5 00:20:51.413 Malloc6 00:20:51.413 Malloc7 00:20:51.413 Malloc8 00:20:51.413 Malloc9 
00:20:51.413 Malloc10 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=4115465 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 4115465 /var/tmp/bdevperf.sock 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4115465 ']' 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.672 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:51.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.673 { 00:20:51.673 "params": { 00:20:51.673 "name": "Nvme$subsystem", 00:20:51.673 "trtype": "$TEST_TRANSPORT", 00:20:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.673 "adrfam": "ipv4", 00:20:51.673 "trsvcid": "$NVMF_PORT", 00:20:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.673 "hdgst": ${hdgst:-false}, 00:20:51.673 "ddgst": ${ddgst:-false} 00:20:51.673 }, 00:20:51.673 "method": "bdev_nvme_attach_controller" 00:20:51.673 } 00:20:51.673 EOF 00:20:51.673 )") 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.673 { 00:20:51.673 "params": { 00:20:51.673 "name": "Nvme$subsystem", 00:20:51.673 "trtype": "$TEST_TRANSPORT", 00:20:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.673 
"adrfam": "ipv4", 00:20:51.673 "trsvcid": "$NVMF_PORT", 00:20:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.673 "hdgst": ${hdgst:-false}, 00:20:51.673 "ddgst": ${ddgst:-false} 00:20:51.673 }, 00:20:51.673 "method": "bdev_nvme_attach_controller" 00:20:51.673 } 00:20:51.673 EOF 00:20:51.673 )") 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.673 { 00:20:51.673 "params": { 00:20:51.673 "name": "Nvme$subsystem", 00:20:51.673 "trtype": "$TEST_TRANSPORT", 00:20:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.673 "adrfam": "ipv4", 00:20:51.673 "trsvcid": "$NVMF_PORT", 00:20:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.673 "hdgst": ${hdgst:-false}, 00:20:51.673 "ddgst": ${ddgst:-false} 00:20:51.673 }, 00:20:51.673 "method": "bdev_nvme_attach_controller" 00:20:51.673 } 00:20:51.673 EOF 00:20:51.673 )") 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.673 { 00:20:51.673 "params": { 00:20:51.673 "name": "Nvme$subsystem", 00:20:51.673 "trtype": "$TEST_TRANSPORT", 00:20:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.673 "adrfam": "ipv4", 00:20:51.673 "trsvcid": "$NVMF_PORT", 00:20:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.673 "hdgst": ${hdgst:-false}, 00:20:51.673 "ddgst": ${ddgst:-false} 00:20:51.673 }, 00:20:51.673 "method": "bdev_nvme_attach_controller" 00:20:51.673 } 00:20:51.673 EOF 00:20:51.673 )") 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.673 { 00:20:51.673 "params": { 00:20:51.673 "name": "Nvme$subsystem", 00:20:51.673 "trtype": "$TEST_TRANSPORT", 00:20:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.673 "adrfam": "ipv4", 00:20:51.673 "trsvcid": "$NVMF_PORT", 00:20:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.673 "hdgst": ${hdgst:-false}, 00:20:51.673 "ddgst": ${ddgst:-false} 00:20:51.673 }, 00:20:51.673 "method": "bdev_nvme_attach_controller" 00:20:51.673 } 00:20:51.673 EOF 00:20:51.673 )") 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.673 { 00:20:51.673 "params": { 00:20:51.673 "name": "Nvme$subsystem", 00:20:51.673 "trtype": "$TEST_TRANSPORT", 00:20:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.673 "adrfam": "ipv4", 00:20:51.673 "trsvcid": "$NVMF_PORT", 00:20:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.673 "hdgst": ${hdgst:-false}, 00:20:51.673 "ddgst": 
${ddgst:-false} 00:20:51.673 }, 00:20:51.673 "method": "bdev_nvme_attach_controller" 00:20:51.673 } 00:20:51.673 EOF 00:20:51.673 )") 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.673 { 00:20:51.673 "params": { 00:20:51.673 "name": "Nvme$subsystem", 00:20:51.673 "trtype": "$TEST_TRANSPORT", 00:20:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.673 "adrfam": "ipv4", 00:20:51.673 "trsvcid": "$NVMF_PORT", 00:20:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.673 "hdgst": ${hdgst:-false}, 00:20:51.673 "ddgst": ${ddgst:-false} 00:20:51.673 }, 00:20:51.673 "method": "bdev_nvme_attach_controller" 00:20:51.673 } 00:20:51.673 EOF 00:20:51.673 )") 00:20:51.673 [2024-11-20 11:15:18.999139] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:20:51.673 [2024-11-20 11:15:18.999198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4115465 ] 00:20:51.673 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:51.673 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.673 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.673 { 00:20:51.673 "params": { 00:20:51.673 "name": "Nvme$subsystem", 00:20:51.673 "trtype": "$TEST_TRANSPORT", 00:20:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.673 "adrfam": "ipv4", 00:20:51.673 "trsvcid": "$NVMF_PORT", 00:20:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.673 "hdgst": ${hdgst:-false}, 00:20:51.673 "ddgst": ${ddgst:-false} 00:20:51.673 }, 00:20:51.673 "method": "bdev_nvme_attach_controller" 00:20:51.673 } 00:20:51.673 EOF 00:20:51.673 )") 00:20:51.673 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:51.673 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.673 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.673 { 00:20:51.673 "params": { 00:20:51.673 "name": "Nvme$subsystem", 00:20:51.673 "trtype": "$TEST_TRANSPORT", 00:20:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.673 "adrfam": "ipv4", 00:20:51.673 "trsvcid": "$NVMF_PORT", 00:20:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.673 "hdgst": 
${hdgst:-false}, 00:20:51.673 "ddgst": ${ddgst:-false} 00:20:51.673 }, 00:20:51.673 "method": "bdev_nvme_attach_controller" 00:20:51.673 } 00:20:51.673 EOF 00:20:51.673 )") 00:20:51.673 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:51.673 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.673 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.673 { 00:20:51.673 "params": { 00:20:51.673 "name": "Nvme$subsystem", 00:20:51.673 "trtype": "$TEST_TRANSPORT", 00:20:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.673 "adrfam": "ipv4", 00:20:51.673 "trsvcid": "$NVMF_PORT", 00:20:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.673 "hdgst": ${hdgst:-false}, 00:20:51.674 "ddgst": ${ddgst:-false} 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 } 00:20:51.674 EOF 00:20:51.674 )") 00:20:51.674 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:51.674 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:20:51.674 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:51.674 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:51.674 "params": { 00:20:51.674 "name": "Nvme1", 00:20:51.674 "trtype": "tcp", 00:20:51.674 "traddr": "10.0.0.2", 00:20:51.674 "adrfam": "ipv4", 00:20:51.674 "trsvcid": "4420", 00:20:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.674 "hdgst": false, 00:20:51.674 "ddgst": false 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 },{ 00:20:51.674 "params": { 00:20:51.674 "name": "Nvme2", 00:20:51.674 "trtype": "tcp", 00:20:51.674 "traddr": "10.0.0.2", 00:20:51.674 "adrfam": "ipv4", 00:20:51.674 "trsvcid": "4420", 00:20:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:51.674 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:51.674 "hdgst": false, 00:20:51.674 "ddgst": false 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 },{ 00:20:51.674 "params": { 00:20:51.674 "name": "Nvme3", 00:20:51.674 "trtype": "tcp", 00:20:51.674 "traddr": "10.0.0.2", 00:20:51.674 "adrfam": "ipv4", 00:20:51.674 "trsvcid": "4420", 00:20:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:51.674 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:51.674 "hdgst": false, 00:20:51.674 "ddgst": false 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 },{ 00:20:51.674 "params": { 00:20:51.674 "name": "Nvme4", 00:20:51.674 "trtype": "tcp", 00:20:51.674 "traddr": "10.0.0.2", 00:20:51.674 "adrfam": "ipv4", 00:20:51.674 "trsvcid": "4420", 00:20:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:51.674 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:51.674 "hdgst": false, 00:20:51.674 "ddgst": false 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 },{ 00:20:51.674 "params": { 
00:20:51.674 "name": "Nvme5", 00:20:51.674 "trtype": "tcp", 00:20:51.674 "traddr": "10.0.0.2", 00:20:51.674 "adrfam": "ipv4", 00:20:51.674 "trsvcid": "4420", 00:20:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:51.674 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:51.674 "hdgst": false, 00:20:51.674 "ddgst": false 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 },{ 00:20:51.674 "params": { 00:20:51.674 "name": "Nvme6", 00:20:51.674 "trtype": "tcp", 00:20:51.674 "traddr": "10.0.0.2", 00:20:51.674 "adrfam": "ipv4", 00:20:51.674 "trsvcid": "4420", 00:20:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:51.674 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:51.674 "hdgst": false, 00:20:51.674 "ddgst": false 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 },{ 00:20:51.674 "params": { 00:20:51.674 "name": "Nvme7", 00:20:51.674 "trtype": "tcp", 00:20:51.674 "traddr": "10.0.0.2", 00:20:51.674 "adrfam": "ipv4", 00:20:51.674 "trsvcid": "4420", 00:20:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:51.674 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:51.674 "hdgst": false, 00:20:51.674 "ddgst": false 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 },{ 00:20:51.674 "params": { 00:20:51.674 "name": "Nvme8", 00:20:51.674 "trtype": "tcp", 00:20:51.674 "traddr": "10.0.0.2", 00:20:51.674 "adrfam": "ipv4", 00:20:51.674 "trsvcid": "4420", 00:20:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:51.674 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:51.674 "hdgst": false, 00:20:51.674 "ddgst": false 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 },{ 00:20:51.674 "params": { 00:20:51.674 "name": "Nvme9", 00:20:51.674 "trtype": "tcp", 00:20:51.674 "traddr": "10.0.0.2", 00:20:51.674 "adrfam": "ipv4", 00:20:51.674 "trsvcid": "4420", 00:20:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:51.674 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:51.674 "hdgst": false, 00:20:51.674 "ddgst": false 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 },{ 00:20:51.674 "params": { 00:20:51.674 "name": "Nvme10", 00:20:51.674 "trtype": "tcp", 00:20:51.674 "traddr": "10.0.0.2", 00:20:51.674 "adrfam": "ipv4", 00:20:51.674 "trsvcid": "4420", 00:20:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:51.674 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:51.674 "hdgst": false, 00:20:51.674 "ddgst": false 00:20:51.674 }, 00:20:51.674 "method": "bdev_nvme_attach_controller" 00:20:51.674 }' 00:20:51.674 [2024-11-20 11:15:19.078113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.674 [2024-11-20 11:15:19.119571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.049 Running I/O for 10 seconds... 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:53.617 11:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 4115465 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 4115465 ']' 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 4115465 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4115465 00:20:53.617 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:53.617 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:53.617 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4115465' 00:20:53.617 killing process with pid 4115465 00:20:53.617 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 4115465 00:20:53.617 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 4115465 00:20:53.617 Received shutdown signal, test time was about 0.750488 seconds 00:20:53.617 00:20:53.617 Latency(us) 00:20:53.617 [2024-11-20T10:15:21.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.617 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.617 Verification LBA range: start 0x0 length 0x400 00:20:53.617 Nvme1n1 : 0.71 269.40 16.84 0.00 0.00 233275.81 28607.89 202420.76 00:20:53.617 Job: Nvme2n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:20:53.617 Verification LBA range: start 0x0 length 0x400 00:20:53.617 Nvme2n1 : 0.75 340.57 21.29 0.00 0.00 179585.92 16184.54 203332.56 00:20:53.617 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.617 Verification LBA range: start 0x0 length 0x400 00:20:53.617 Nvme3n1 : 0.74 261.16 16.32 0.00 0.00 230812.27 15956.59 224304.08 00:20:53.617 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.617 Verification LBA range: start 0x0 length 0x400 00:20:53.617 Nvme4n1 : 0.72 275.10 17.19 0.00 0.00 211355.91 6753.06 214274.23 00:20:53.617 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.617 Verification LBA range: start 0x0 length 0x400 00:20:53.617 Nvme5n1 : 0.73 262.70 16.42 0.00 0.00 218660.51 18122.13 205156.17 00:20:53.617 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.617 Verification LBA range: start 0x0 length 0x400 00:20:53.617 Nvme6n1 : 0.74 259.26 16.20 0.00 0.00 216637.14 17210.32 224304.08 00:20:53.617 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.617 Verification LBA range: start 0x0 length 0x400 00:20:53.617 Nvme7n1 : 0.72 266.20 16.64 0.00 0.00 204732.77 13050.21 216097.84 00:20:53.617 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.617 Verification LBA range: start 0x0 length 0x400 00:20:53.617 Nvme8n1 : 0.73 264.07 16.50 0.00 0.00 201562.38 31685.23 202420.76 00:20:53.617 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.617 Verification LBA range: start 0x0 length 0x400 00:20:53.617 Nvme9n1 : 0.74 258.40 16.15 0.00 0.00 201615.81 21655.37 222480.47 00:20:53.617 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.617 Verification LBA range: start 0x0 length 0x400 00:20:53.617 Nvme10n1 : 0.75 256.06 16.00 0.00 0.00 198709.72 21883.33 
240716.58 00:20:53.617 [2024-11-20T10:15:21.113Z] =================================================================================================================== 00:20:53.617 [2024-11-20T10:15:21.113Z] Total : 2712.91 169.56 0.00 0.00 208759.52 6753.06 240716.58 00:20:53.877 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 4115154 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.812 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.812 rmmod nvme_tcp 00:20:55.071 rmmod nvme_fabrics 00:20:55.071 rmmod nvme_keyring 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 4115154 ']' 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 4115154 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 4115154 ']' 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 4115154 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4115154 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:55.071 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:55.072 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4115154' 00:20:55.072 killing process with pid 4115154 00:20:55.072 11:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 4115154 00:20:55.072 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 4115154 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.332 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:57.869 00:20:57.869 real 
0m7.666s 00:20:57.869 user 0m22.735s 00:20:57.869 sys 0m1.305s 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.869 ************************************ 00:20:57.869 END TEST nvmf_shutdown_tc2 00:20:57.869 ************************************ 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:57.869 ************************************ 00:20:57.869 START TEST nvmf_shutdown_tc3 00:20:57.869 ************************************ 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:57.869 
11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:57.869 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:57.869 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:57.869 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:57.870 11:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:57.870 Found net devices under 0000:86:00.0: cvl_0_0 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.870 
11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:57.870 Found net devices under 0000:86:00.1: cvl_0_1 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.870 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.870 11:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:57.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:20:57.870 00:20:57.870 --- 10.0.0.2 ping statistics --- 00:20:57.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.870 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:57.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:20:57.870 00:20:57.870 --- 10.0.0.1 ping statistics --- 00:20:57.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.870 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.870 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.871 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.871 
11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=4116678 00:20:57.871 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 4116678 00:20:57.871 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:57.871 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 4116678 ']' 00:20:57.871 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.871 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.871 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.871 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.871 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.871 [2024-11-20 11:15:25.305603] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:20:57.871 [2024-11-20 11:15:25.305655] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.129 [2024-11-20 11:15:25.385037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.129 [2024-11-20 11:15:25.427198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.129 [2024-11-20 11:15:25.427238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.129 [2024-11-20 11:15:25.427246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.129 [2024-11-20 11:15:25.427253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.129 [2024-11-20 11:15:25.427259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:58.129 [2024-11-20 11:15:25.431681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.129 [2024-11-20 11:15:25.431717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.129 [2024-11-20 11:15:25.431827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.129 [2024-11-20 11:15:25.431828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:58.697 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.697 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:58.697 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.697 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.697 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.955 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.955 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:58.955 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.955 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.955 [2024-11-20 11:15:26.198171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.955 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.955 11:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:58.955 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.956 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.956 Malloc1 00:20:58.956 [2024-11-20 11:15:26.303489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.956 Malloc2 00:20:58.956 Malloc3 00:20:58.956 Malloc4 00:20:59.215 Malloc5 00:20:59.215 Malloc6 00:20:59.215 Malloc7 00:20:59.215 Malloc8 00:20:59.215 Malloc9 
00:20:59.215 Malloc10 00:20:59.215 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.215 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:59.215 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.215 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.474 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=4116960 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 4116960 /var/tmp/bdevperf.sock 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 4116960 ']' 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.475 11:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.475 { 00:20:59.475 "params": { 00:20:59.475 "name": "Nvme$subsystem", 00:20:59.475 "trtype": "$TEST_TRANSPORT", 00:20:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.475 "adrfam": "ipv4", 00:20:59.475 "trsvcid": "$NVMF_PORT", 00:20:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.475 "hdgst": ${hdgst:-false}, 00:20:59.475 "ddgst": ${ddgst:-false} 00:20:59.475 }, 00:20:59.475 "method": "bdev_nvme_attach_controller" 00:20:59.475 } 00:20:59.475 EOF 00:20:59.475 )") 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.475 { 00:20:59.475 "params": { 00:20:59.475 "name": "Nvme$subsystem", 00:20:59.475 "trtype": "$TEST_TRANSPORT", 00:20:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.475 "adrfam": "ipv4", 00:20:59.475 "trsvcid": "$NVMF_PORT", 00:20:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.475 "hdgst": ${hdgst:-false}, 00:20:59.475 "ddgst": ${ddgst:-false} 00:20:59.475 }, 00:20:59.475 "method": "bdev_nvme_attach_controller" 00:20:59.475 } 00:20:59.475 EOF 00:20:59.475 )") 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.475 { 00:20:59.475 "params": { 00:20:59.475 "name": "Nvme$subsystem", 00:20:59.475 "trtype": "$TEST_TRANSPORT", 00:20:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.475 "adrfam": "ipv4", 00:20:59.475 "trsvcid": "$NVMF_PORT", 00:20:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.475 "hdgst": ${hdgst:-false}, 00:20:59.475 "ddgst": 
${ddgst:-false} 00:20:59.475 }, 00:20:59.475 "method": "bdev_nvme_attach_controller" 00:20:59.475 } 00:20:59.475 EOF 00:20:59.475 )") 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.475 { 00:20:59.475 "params": { 00:20:59.475 "name": "Nvme$subsystem", 00:20:59.475 "trtype": "$TEST_TRANSPORT", 00:20:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.475 "adrfam": "ipv4", 00:20:59.475 "trsvcid": "$NVMF_PORT", 00:20:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.475 "hdgst": ${hdgst:-false}, 00:20:59.475 "ddgst": ${ddgst:-false} 00:20:59.475 }, 00:20:59.475 "method": "bdev_nvme_attach_controller" 00:20:59.475 } 00:20:59.475 EOF 00:20:59.475 )") 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.475 { 00:20:59.475 "params": { 00:20:59.475 "name": "Nvme$subsystem", 00:20:59.475 "trtype": "$TEST_TRANSPORT", 00:20:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.475 "adrfam": "ipv4", 00:20:59.475 "trsvcid": "$NVMF_PORT", 00:20:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.475 "hdgst": ${hdgst:-false}, 00:20:59.475 "ddgst": ${ddgst:-false} 00:20:59.475 }, 00:20:59.475 "method": "bdev_nvme_attach_controller" 00:20:59.475 } 00:20:59.475 EOF 00:20:59.475 
)") 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.475 { 00:20:59.475 "params": { 00:20:59.475 "name": "Nvme$subsystem", 00:20:59.475 "trtype": "$TEST_TRANSPORT", 00:20:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.475 "adrfam": "ipv4", 00:20:59.475 "trsvcid": "$NVMF_PORT", 00:20:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.475 "hdgst": ${hdgst:-false}, 00:20:59.475 "ddgst": ${ddgst:-false} 00:20:59.475 }, 00:20:59.475 "method": "bdev_nvme_attach_controller" 00:20:59.475 } 00:20:59.475 EOF 00:20:59.475 )") 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.475 { 00:20:59.475 "params": { 00:20:59.475 "name": "Nvme$subsystem", 00:20:59.475 "trtype": "$TEST_TRANSPORT", 00:20:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.475 "adrfam": "ipv4", 00:20:59.475 "trsvcid": "$NVMF_PORT", 00:20:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.475 "hdgst": ${hdgst:-false}, 00:20:59.475 "ddgst": ${ddgst:-false} 00:20:59.475 }, 00:20:59.475 "method": "bdev_nvme_attach_controller" 00:20:59.475 } 00:20:59.475 EOF 00:20:59.475 )") 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.475 
[2024-11-20 11:15:26.782068] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:20:59.475 [2024-11-20 11:15:26.782117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4116960 ] 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.475 { 00:20:59.475 "params": { 00:20:59.475 "name": "Nvme$subsystem", 00:20:59.475 "trtype": "$TEST_TRANSPORT", 00:20:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.475 "adrfam": "ipv4", 00:20:59.475 "trsvcid": "$NVMF_PORT", 00:20:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.475 "hdgst": ${hdgst:-false}, 00:20:59.475 "ddgst": ${ddgst:-false} 00:20:59.475 }, 00:20:59.475 "method": "bdev_nvme_attach_controller" 00:20:59.475 } 00:20:59.475 EOF 00:20:59.475 )") 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.475 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.475 { 00:20:59.475 "params": { 00:20:59.475 "name": "Nvme$subsystem", 00:20:59.475 "trtype": "$TEST_TRANSPORT", 00:20:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.475 "adrfam": "ipv4", 00:20:59.475 "trsvcid": "$NVMF_PORT", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.476 "hdgst": 
${hdgst:-false}, 00:20:59.476 "ddgst": ${ddgst:-false} 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 } 00:20:59.476 EOF 00:20:59.476 )") 00:20:59.476 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.476 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.476 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.476 { 00:20:59.476 "params": { 00:20:59.476 "name": "Nvme$subsystem", 00:20:59.476 "trtype": "$TEST_TRANSPORT", 00:20:59.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "$NVMF_PORT", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.476 "hdgst": ${hdgst:-false}, 00:20:59.476 "ddgst": ${ddgst:-false} 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 } 00:20:59.476 EOF 00:20:59.476 )") 00:20:59.476 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.476 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
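The trace above shows the JSON config for bdevperf being assembled one subsystem at a time: each loop iteration appends a heredoc fragment to a bash array, then `IFS=','` joins the fragments and `jq .` validates the result. A minimal standalone sketch of that pattern (simplified from what `gen_nvmf_target_json` does; the subsystem count, `bdev_nvme` section name, and parameter values here are placeholders, not the real test configuration):

```shell
#!/usr/bin/env bash
set -euo pipefail

# One JSON fragment per subsystem, collected into a bash array -- the same
# config+=("$(cat <<EOF ...)") shape seen in the xtrace output.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hdgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# With IFS=',' the ${config[*]} expansion joins the fragments with commas,
# which is valid JSON once they sit inside an array literal; jq then
# validates and pretty-prints the whole document.
IFS=','
json=$(jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev_nvme",
      "config": [ ${config[*]} ]
    }
  ]
}
JSON
)
printf '%s\n' "$json"
```

The comma join only works because the fragments are embedded inside a surrounding `"config": [ ... ]` array; `jq` would reject comma-separated objects at the top level of its input.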
00:20:59.476 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:59.476 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:59.476 "params": { 00:20:59.476 "name": "Nvme1", 00:20:59.476 "trtype": "tcp", 00:20:59.476 "traddr": "10.0.0.2", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "4420", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.476 "hdgst": false, 00:20:59.476 "ddgst": false 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 },{ 00:20:59.476 "params": { 00:20:59.476 "name": "Nvme2", 00:20:59.476 "trtype": "tcp", 00:20:59.476 "traddr": "10.0.0.2", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "4420", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:59.476 "hdgst": false, 00:20:59.476 "ddgst": false 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 },{ 00:20:59.476 "params": { 00:20:59.476 "name": "Nvme3", 00:20:59.476 "trtype": "tcp", 00:20:59.476 "traddr": "10.0.0.2", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "4420", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:59.476 "hdgst": false, 00:20:59.476 "ddgst": false 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 },{ 00:20:59.476 "params": { 00:20:59.476 "name": "Nvme4", 00:20:59.476 "trtype": "tcp", 00:20:59.476 "traddr": "10.0.0.2", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "4420", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:59.476 "hdgst": false, 00:20:59.476 "ddgst": false 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 },{ 00:20:59.476 "params": { 
00:20:59.476 "name": "Nvme5", 00:20:59.476 "trtype": "tcp", 00:20:59.476 "traddr": "10.0.0.2", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "4420", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:59.476 "hdgst": false, 00:20:59.476 "ddgst": false 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 },{ 00:20:59.476 "params": { 00:20:59.476 "name": "Nvme6", 00:20:59.476 "trtype": "tcp", 00:20:59.476 "traddr": "10.0.0.2", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "4420", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:59.476 "hdgst": false, 00:20:59.476 "ddgst": false 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 },{ 00:20:59.476 "params": { 00:20:59.476 "name": "Nvme7", 00:20:59.476 "trtype": "tcp", 00:20:59.476 "traddr": "10.0.0.2", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "4420", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:59.476 "hdgst": false, 00:20:59.476 "ddgst": false 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 },{ 00:20:59.476 "params": { 00:20:59.476 "name": "Nvme8", 00:20:59.476 "trtype": "tcp", 00:20:59.476 "traddr": "10.0.0.2", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "4420", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:59.476 "hdgst": false, 00:20:59.476 "ddgst": false 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 },{ 00:20:59.476 "params": { 00:20:59.476 "name": "Nvme9", 00:20:59.476 "trtype": "tcp", 00:20:59.476 "traddr": "10.0.0.2", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "4420", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:59.476 "hdgst": false, 00:20:59.476 "ddgst": false 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 },{ 00:20:59.476 "params": { 00:20:59.476 "name": "Nvme10", 00:20:59.476 "trtype": "tcp", 00:20:59.476 "traddr": "10.0.0.2", 00:20:59.476 "adrfam": "ipv4", 00:20:59.476 "trsvcid": "4420", 00:20:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:59.476 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:59.476 "hdgst": false, 00:20:59.476 "ddgst": false 00:20:59.476 }, 00:20:59.476 "method": "bdev_nvme_attach_controller" 00:20:59.476 }' 00:20:59.476 [2024-11-20 11:15:26.859367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.476 [2024-11-20 11:15:26.900758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.456 Running I/O for 10 seconds... 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:01.456 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:01.715 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:21:01.715 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:01.715 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:01.715 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:01.715 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.715 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.715 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.715 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:01.715 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:01.715 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:01.973 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:01.973 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:01.973 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:01.973 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:01.973 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.973 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:01.973 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 4116678 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 4116678 ']' 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 4116678 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4116678 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 4116678' 00:21:02.247 killing process with pid 4116678 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 4116678 00:21:02.247 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 4116678 00:21:02.247 [2024-11-20 11:15:29.539639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.247 [2024-11-20 11:15:29.539689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.539700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.247 [2024-11-20 11:15:29.539707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.539715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.247 [2024-11-20 11:15:29.539722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.539729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.247 [2024-11-20 11:15:29.539736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.539743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58e1b0 is same with the state(6) to be set 00:21:02.247 [2024-11-20 11:15:29.540003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 
[2024-11-20 11:15:29.540287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.247 [2024-11-20 11:15:29.540399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.247 [2024-11-20 11:15:29.540407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 
11:15:29.540623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540960] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.540977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.248 [2024-11-20 11:15:29.540984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.248 [2024-11-20 11:15:29.541014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.249 [2024-11-20 11:15:29.540973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to 
be set 00:21:02.249 [2024-11-20 11:15:29.541090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 
11:15:29.541169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541247] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 
is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.541450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b180 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.542960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.542977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.542986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.542992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.542998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 
00:21:02.249 [2024-11-20 11:15:29.543005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543078] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.249 [2024-11-20 11:15:29.543085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 
is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 00:21:02.250 [2024-11-20 11:15:29.543303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28bf0 is same with the state(6) to be set 
00:21:02.250 [2024-11-20 11:15:29.544029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:02.250 [2024-11-20 11:15:29.544061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58e1b0 (9): Bad file descriptor
00:21:02.250 [2024-11-20 11:15:29.544858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf290c0 is same with the state(6) to be set
00:21:02.251 [2024-11-20 11:15:29.545457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.251 [2024-11-20 11:15:29.545481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58e1b0 with addr=10.0.0.2, port=4420
00:21:02.251 [2024-11-20 11:15:29.545492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58e1b0 is same with the state(6) to be set
00:21:02.251 [2024-11-20 11:15:29.546508] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:02.251 [2024-11-20 11:15:29.546534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58e1b0 (9): Bad file descriptor
00:21:02.251 [2024-11-20 11:15:29.547044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf295b0 is same with the state(6) to be set
00:21:02.251 [2024-11-20 11:15:29.547070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:02.251 [2024-11-20 11:15:29.547088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:02.251 [2024-11-20 11:15:29.547099]
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:02.251 [2024-11-20 11:15:29.547109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:02.251 [2024-11-20 11:15:29.547146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.251 [2024-11-20 11:15:29.547160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.251 [2024-11-20 11:15:29.547174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.251 [2024-11-20 11:15:29.547182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.251 [2024-11-20 11:15:29.547197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.251 [2024-11-20 11:15:29.547205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.251 [2024-11-20 11:15:29.547215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.251 [2024-11-20 11:15:29.547223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.251 [2024-11-20 11:15:29.547235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.251 [2024-11-20 11:15:29.547243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.251 [2024-11-20 11:15:29.547252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.252 [2024-11-20 11:15:29.547568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.252 [2024-11-20 11:15:29.547577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1
lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.252 [2024-11-20 11:15:29.547583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.252 [2024-11-20 11:15:29.547591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.252 [2024-11-20 11:15:29.547598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.252 [2024-11-20 11:15:29.547609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.252 [2024-11-20 11:15:29.547616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.252 [2024-11-20 11:15:29.547624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.252 [2024-11-20 11:15:29.547631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.253 [2024-11-20 11:15:29.547673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547756] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.547985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.547992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 
11:15:29.548021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548105] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.253 [2024-11-20 11:15:29.548203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.253 [2024-11-20 11:15:29.548210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793620 is same with the state(6) to be set 00:21:02.253 [2024-11-20 11:15:29.548436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.253 [2024-11-20 11:15:29.548465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.253 [2024-11-20 11:15:29.548474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.253 [2024-11-20 11:15:29.548481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.253 [2024-11-20 11:15:29.548488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the 
state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 
11:15:29.548601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548678] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 
is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.548861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29930 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.254 [2024-11-20 11:15:29.549786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.254 [2024-11-20 11:15:29.549794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.254 [2024-11-20 11:15:29.549801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.254 [2024-11-20 11:15:29.549808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.254 [2024-11-20 11:15:29.549810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.254 [2024-11-20 11:15:29.549824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.254 [2024-11-20 11:15:29.549834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.254 [2024-11-20 11:15:29.549843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58dd50 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.254 [2024-11-20 11:15:29.549893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.254 [2024-11-20 11:15:29.549900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.254 [2024-11-20 11:15:29.549906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.549919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.549926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.549933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.549941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.549951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.549960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58a9e0 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.549999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.550001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.550008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.550024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.550031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.550041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550047] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.550048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.550066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.550069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bc8b0 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.550103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.550110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.550116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-20 11:15:29.550124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 he state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-11-20 11:15:29.550133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with tid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 he state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-20 11:15:29.550143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 he state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550154] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.550159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.550167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7e20 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with t[2024-11-20 11:15:29.550204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nshe state(6) to be set 00:21:02.255 id:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.550214] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.550221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.550228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.550234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 [2024-11-20 11:15:29.550242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 [2024-11-20 11:15:29.550249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-11-20 11:15:29.550257] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with tid:0 cdw10:00000000 cdw11:00000000 00:21:02.255 he state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-20 11:15:29.550267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.255 he state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58b970 is same with the state(6) to be set 00:21:02.255 [2024-11-20 11:15:29.550277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.550284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf29e00 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 
11:15:29.551338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551455] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 
is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:02.256 [2024-11-20 11:15:29.551738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58dd50 (9): Bad file descriptor 00:21:02.256 [2024-11-20 11:15:29.551759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 
11:15:29.551813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.551900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.552728] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:02.256 [2024-11-20 11:15:29.553376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.256 [2024-11-20 11:15:29.553397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58dd50 with addr=10.0.0.2, 
port=4420 00:21:02.256 [2024-11-20 11:15:29.553405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58dd50 is same with the state(6) to be set 00:21:02.256 [2024-11-20 11:15:29.553468] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:02.256 [2024-11-20 11:15:29.553624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.256 [2024-11-20 11:15:29.553635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.256 [2024-11-20 11:15:29.553648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553879] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553969] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.553991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.553998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.554006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.554013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.554021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.554027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.554036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.554043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.554051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.554057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.554065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.554071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.554079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.554086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.554095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.554102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.554110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.554116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 11:15:29.554125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.257 [2024-11-20 11:15:29.554131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.257 [2024-11-20 
11:15:29.554141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.257 [2024-11-20 11:15:29.554148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 26 further near-identical READ / ABORTED - SQ DELETION (00/08) command-completion pairs elided: qid:1, cid:34-59, nsid:1, lba:28928-32128 (step 128), len:128 ...]
00:21:02.258 [2024-11-20 11:15:29.554621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58dd50 (9): Bad file descriptor
00:21:02.258 [2024-11-20 11:15:29.554702] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:02.258 [2024-11-20 11:15:29.554749] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:02.258 [2024-11-20 11:15:29.555703]
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:02.258 [2024-11-20 11:15:29.555744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e7a60 (9): Bad file descriptor
00:21:02.258 [2024-11-20 11:15:29.555755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:02.258 [2024-11-20 11:15:29.555762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:02.258 [2024-11-20 11:15:29.555771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:02.258 [2024-11-20 11:15:29.555778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:02.258 [2024-11-20 11:15:29.555934] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:02.258 [2024-11-20 11:15:29.556196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:02.258 [2024-11-20 11:15:29.556410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.258 [2024-11-20 11:15:29.556424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e7a60 with addr=10.0.0.2, port=4420
00:21:02.258 [2024-11-20 11:15:29.556432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7a60 is same with the state(6) to be set
00:21:02.258 [2024-11-20 11:15:29.556678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.258 [2024-11-20 11:15:29.556690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58e1b0 with addr=10.0.0.2, port=4420
00:21:02.258 [2024-11-20 11:15:29.556698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58e1b0 is same with the state(6) to be set
00:21:02.258 [2024-11-20 11:15:29.556706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e7a60 (9): Bad file descriptor
00:21:02.258 [2024-11-20 11:15:29.556751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58e1b0 (9): Bad file descriptor
00:21:02.258 [2024-11-20 11:15:29.556760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:21:02.258 [2024-11-20 11:15:29.556767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:21:02.258 [2024-11-20 11:15:29.556775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:21:02.258 [2024-11-20 11:15:29.556781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:21:02.258 [2024-11-20 11:15:29.556810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:02.258 [2024-11-20 11:15:29.556817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:02.258 [2024-11-20 11:15:29.556823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:02.258 [2024-11-20 11:15:29.556829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:02.258 [2024-11-20 11:15:29.559789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.258 [2024-11-20 11:15:29.559800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.258 [2024-11-20 11:15:29.559808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.258 [2024-11-20 11:15:29.559814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.258 [2024-11-20 11:15:29.559822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.258 [2024-11-20 11:15:29.559829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.258 [2024-11-20 11:15:29.559836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.258 [2024-11-20 11:15:29.559842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.258 [2024-11-20 11:15:29.559849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa01e30 is same with the state(6) to be set
00:21:02.258 [2024-11-20 11:15:29.559873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.258 [2024-11-20 11:15:29.559881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.258 [2024-11-20 11:15:29.559889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.258 [2024-11-20 11:15:29.559896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.258 [2024-11-20 11:15:29.559904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.258 [2024-11-20 11:15:29.559910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.258 [2024-11-20 11:15:29.559917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.258 [2024-11-20 11:15:29.559924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.259 [2024-11-20 11:15:29.559930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4a2610 is same with the state(6) to be set
00:21:02.259 [2024-11-20 11:15:29.559952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58a9e0 (9): Bad file descriptor
00:21:02.259 [2024-11-20 11:15:29.559968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bc8b0 (9): Bad file descriptor
00:21:02.259 [2024-11-20 11:15:29.559982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e7e20 (9): Bad file descriptor
00:21:02.259 [2024-11-20 11:15:29.560009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.259 [2024-11-20 11:15:29.560017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.259 [2024-11-20 11:15:29.560025] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.259 [2024-11-20 11:15:29.560031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.259 [2024-11-20 11:15:29.560038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.259 [2024-11-20 11:15:29.560045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.259 [2024-11-20 11:15:29.560052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:02.259 [2024-11-20 11:15:29.560059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.259 [2024-11-20 11:15:29.560065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0b50 is same with the state(6) to be set
00:21:02.259 [2024-11-20 11:15:29.560079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58b970 (9): Bad file descriptor
00:21:02.259 [2024-11-20 11:15:29.562541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:02.259 [2024-11-20 11:15:29.562740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.259 [2024-11-20 11:15:29.562754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58dd50 with addr=10.0.0.2, port=4420
00:21:02.259 [2024-11-20 11:15:29.562761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58dd50 is same with the state(6) to be set
00:21:02.259 [2024-11-20 11:15:29.562791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58dd50 (9): Bad file descriptor
00:21:02.259 [2024-11-20 11:15:29.562830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:02.259 [2024-11-20 11:15:29.562837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:02.259 [2024-11-20 11:15:29.562845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:02.259 [2024-11-20 11:15:29.562851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:02.259 [2024-11-20 11:15:29.565972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:02.259 [2024-11-20 11:15:29.566194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.259 [2024-11-20 11:15:29.566210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e7a60 with addr=10.0.0.2, port=4420
00:21:02.259 [2024-11-20 11:15:29.566217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7a60 is same with the state(6) to be set
00:21:02.259 [2024-11-20 11:15:29.566249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e7a60 (9): Bad file descriptor
00:21:02.259 [2024-11-20 11:15:29.566292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:21:02.259 [2024-11-20 11:15:29.566300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:21:02.259 [2024-11-20 11:15:29.566307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:21:02.259 [2024-11-20 11:15:29.566314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:21:02.259 [2024-11-20 11:15:29.566343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:02.259 [2024-11-20 11:15:29.566571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.259 [2024-11-20 11:15:29.566584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58e1b0 with addr=10.0.0.2, port=4420
00:21:02.259 [2024-11-20 11:15:29.566592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58e1b0 is same with the state(6) to be set
00:21:02.259 [2024-11-20 11:15:29.566622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58e1b0 (9): Bad file descriptor
00:21:02.259 [2024-11-20 11:15:29.566652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:02.259 [2024-11-20 11:15:29.566659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:02.259 [2024-11-20 11:15:29.566665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:02.259 [2024-11-20 11:15:29.566671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:02.259 [2024-11-20 11:15:29.569805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa01e30 (9): Bad file descriptor
00:21:02.259 [2024-11-20 11:15:29.569839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4a2610 (9): Bad file descriptor
00:21:02.259 [2024-11-20 11:15:29.569868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f0b50 (9): Bad file descriptor
00:21:02.259 [2024-11-20 11:15:29.569965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.259 [2024-11-20 11:15:29.569977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 48 further near-identical READ / ABORTED - SQ DELETION (00/08) command-completion pairs elided: qid:1, cid:1-48, nsid:1, lba:24704-30720 (step 128), len:128 ...]
00:21:02.260 [2024-11-20 11:15:29.570722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.260
[2024-11-20 11:15:29.570729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.260 [2024-11-20 11:15:29.570737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.260 [2024-11-20 11:15:29.570743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.260 [2024-11-20 11:15:29.570751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.260 [2024-11-20 11:15:29.570758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.260 [2024-11-20 11:15:29.570766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.260 [2024-11-20 11:15:29.570773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.260 [2024-11-20 11:15:29.570781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.260 [2024-11-20 11:15:29.570788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.260 [2024-11-20 11:15:29.570796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.570802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.570810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.570817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.570827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.570835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.570843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.570850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.570858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.570865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.570872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.570879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.570887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.570893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.570901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.570908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.570916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.570922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.570930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.570937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.570944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9848c0 is same with the state(6) to be set 00:21:02.261 [2024-11-20 11:15:29.571990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.261 [2024-11-20 11:15:29.572031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.261 [2024-11-20 11:15:29.572287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572368] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.261 [2024-11-20 11:15:29.572426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.261 [2024-11-20 11:15:29.572436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 
11:15:29.572627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 
[2024-11-20 11:15:29.572882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.262 [2024-11-20 11:15:29.572960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.262 [2024-11-20 11:15:29.572967] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x991bc0 is same with the state(6) to be set 00:21:02.262
[... 2024-11-20 11:15:29.573971 through 11:15:29.574928: repeated nvme_qpair.c NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion) — READ sqid:1 cid:5-63 nsid:1 lba:25216-32640 len:128 and WRITE sqid:1 cid:0-4 nsid:1 lba:32768-33280 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:02.264 [2024-11-20 11:15:29.574935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9930f0 is same with the state(6) to be set
[... 2024-11-20 11:15:29.575964 through 11:15:29.576688: repeated nvme_qpair.c NOTICE pairs — READ sqid:1 cid:0-47 nsid:1 lba:16384-22400 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:02.265 [2024-11-20 11:15:29.576696] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.265 [2024-11-20 11:15:29.576702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.265 [2024-11-20 11:15:29.576710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.265 [2024-11-20 11:15:29.576717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.265 [2024-11-20 11:15:29.576725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.265 [2024-11-20 11:15:29.576733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.265 [2024-11-20 11:15:29.576741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.265 [2024-11-20 11:15:29.576747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.265 [2024-11-20 11:15:29.576756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.265 [2024-11-20 11:15:29.576762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.576777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.576792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.576807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.576822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.576836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.576851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 
[2024-11-20 11:15:29.576865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.576881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.576896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.576910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.576926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.576934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0f10 is same with the state(6) to be set 00:21:02.266 [2024-11-20 11:15:29.577904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:02.266 [2024-11-20 11:15:29.577919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:02.266 [2024-11-20 11:15:29.577928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:02.266 [2024-11-20 11:15:29.577936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:02.266 [2024-11-20 11:15:29.578323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.266 [2024-11-20 11:15:29.578341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58b970 with addr=10.0.0.2, port=4420 00:21:02.266 [2024-11-20 11:15:29.578349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58b970 is same with the state(6) to be set 00:21:02.266 [2024-11-20 11:15:29.578567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.266 [2024-11-20 11:15:29.578577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58a9e0 with addr=10.0.0.2, port=4420 00:21:02.266 [2024-11-20 11:15:29.578585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58a9e0 is same with the state(6) to be set 00:21:02.266 [2024-11-20 11:15:29.578779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.266 [2024-11-20 11:15:29.578789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bc8b0 with addr=10.0.0.2, port=4420 00:21:02.266 [2024-11-20 11:15:29.578796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bc8b0 is same with the state(6) to be set 00:21:02.266 [2024-11-20 11:15:29.578920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.266 [2024-11-20 11:15:29.578930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e7e20 with addr=10.0.0.2, port=4420 00:21:02.266 [2024-11-20 
11:15:29.578937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7e20 is same with the state(6) to be set 00:21:02.266 [2024-11-20 11:15:29.579914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:02.266 [2024-11-20 11:15:29.579929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:02.266 [2024-11-20 11:15:29.579937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:02.266 [2024-11-20 11:15:29.579969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58b970 (9): Bad file descriptor 00:21:02.266 [2024-11-20 11:15:29.579979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58a9e0 (9): Bad file descriptor 00:21:02.266 [2024-11-20 11:15:29.579988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bc8b0 (9): Bad file descriptor 00:21:02.266 [2024-11-20 11:15:29.579997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e7e20 (9): Bad file descriptor 00:21:02.266 [2024-11-20 11:15:29.580201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.266 [2024-11-20 11:15:29.580214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58dd50 with addr=10.0.0.2, port=4420 00:21:02.266 [2024-11-20 11:15:29.580222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58dd50 is same with the state(6) to be set 00:21:02.266 [2024-11-20 11:15:29.580396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.266 [2024-11-20 11:15:29.580406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e7a60 with addr=10.0.0.2, port=4420 00:21:02.266 [2024-11-20 11:15:29.580413] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7a60 is same with the state(6) to be set 00:21:02.266 [2024-11-20 11:15:29.580624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.266 [2024-11-20 11:15:29.580635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58e1b0 with addr=10.0.0.2, port=4420 00:21:02.266 [2024-11-20 11:15:29.580642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58e1b0 is same with the state(6) to be set 00:21:02.266 [2024-11-20 11:15:29.580649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:02.266 [2024-11-20 11:15:29.580656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:02.266 [2024-11-20 11:15:29.580663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:02.266 [2024-11-20 11:15:29.580670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:02.266 [2024-11-20 11:15:29.580678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:02.266 [2024-11-20 11:15:29.580684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:02.266 [2024-11-20 11:15:29.580690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:02.266 [2024-11-20 11:15:29.580696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:02.266 [2024-11-20 11:15:29.580703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:02.266 [2024-11-20 11:15:29.580710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:02.266 [2024-11-20 11:15:29.580717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:02.266 [2024-11-20 11:15:29.580723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:02.266 [2024-11-20 11:15:29.580729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:02.266 [2024-11-20 11:15:29.580735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:02.266 [2024-11-20 11:15:29.580741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:02.266 [2024-11-20 11:15:29.580747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:02.266 [2024-11-20 11:15:29.580806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.580814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.580826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.580833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.580843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.580850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.580861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.580868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.580877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.580883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.580891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.580898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.266 [2024-11-20 11:15:29.580907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.266 [2024-11-20 11:15:29.580913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.580921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.580928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.580936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.580943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.580960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.580966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.580975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.580981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.580989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.580996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.267 [2024-11-20 11:15:29.581079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581159] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 11:15:29.581394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.267 [2024-11-20 11:15:29.581402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.267 [2024-11-20 
11:15:29.581408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.267 [2024-11-20 11:15:29.581416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.267 [2024-11-20 11:15:29.581422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.267 [2024-11-20 11:15:29.581432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.267 [2024-11-20 11:15:29.581439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.267 [2024-11-20 11:15:29.581447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.267 [2024-11-20 11:15:29.581453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.267 [2024-11-20 11:15:29.581462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.267 [2024-11-20 11:15:29.581468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.267 [2024-11-20 11:15:29.581477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.267 [2024-11-20 11:15:29.581483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.267 [2024-11-20 11:15:29.581491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.267 [2024-11-20 11:15:29.581498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.581779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.581786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x994620 is same with the state(6) to be set
00:21:02.268 [2024-11-20 11:15:29.582802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.582989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.582996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.583003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.583011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.583018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.583028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.583034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.583043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.583050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.583058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.583065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.583073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.583079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.583088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.583094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.583103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.583109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.268 [2024-11-20 11:15:29.583117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.268 [2024-11-20 11:15:29.583124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.269 [2024-11-20 11:15:29.583634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.269 [2024-11-20 11:15:29.583642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.270 [2024-11-20 11:15:29.583649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.270 [2024-11-20 11:15:29.583657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.270 [2024-11-20 11:15:29.583664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.270 [2024-11-20 11:15:29.583672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.270 [2024-11-20 11:15:29.583678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.270 [2024-11-20 11:15:29.583686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.270 [2024-11-20 11:15:29.583693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.270 [2024-11-20 11:15:29.583701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.270 [2024-11-20 11:15:29.583708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.270 [2024-11-20 11:15:29.583716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.270 [2024-11-20 11:15:29.583722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.270 [2024-11-20 11:15:29.583731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.270 [2024-11-20 11:15:29.583738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.270 [2024-11-20 11:15:29.583746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.270 [2024-11-20 11:15:29.583753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.270 [2024-11-20 11:15:29.583761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.270 [2024-11-20 11:15:29.583767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.270 [2024-11-20 11:15:29.583777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995ba0 is same with the state(6) to be set
00:21:02.270 [2024-11-20 11:15:29.584769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:02.270 [2024-11-20 11:15:29.584783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:02.270 task offset: 26112 on job bdev=Nvme1n1 fails
00:21:02.270
00:21:02.270 Latency(us)
00:21:02.270 [2024-11-20T10:15:29.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:02.270 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.270 Job: Nvme1n1 ended in about 0.87 seconds with error
00:21:02.270 Verification LBA range: start 0x0 length 0x400
00:21:02.270 Nvme1n1 : 0.87 220.59 13.79 73.53 0.00 215186.89 2664.18 223392.28
00:21:02.270 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.270 Job: Nvme2n1 ended in about 0.88 seconds with error
00:21:02.270 Verification LBA range: start 0x0 length 0x400
00:21:02.270 Nvme2n1 : 0.88 223.16 13.95 72.87 0.00 209829.56 6810.05 217009.64
00:21:02.270 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.270 Job: Nvme3n1 ended in about 0.90 seconds with error
00:21:02.270 Verification LBA range: start 0x0 length 0x400
00:21:02.270 Nvme3n1 : 0.90 213.59 13.35 71.20 0.00 214260.42 15044.79 215186.03
00:21:02.270 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.270 Job: Nvme4n1 ended in about 0.90 seconds with error
00:21:02.270 Verification LBA range: start 0x0 length 0x400
00:21:02.270 Nvme4n1 : 0.90 213.11 13.32 71.04 0.00 210770.59 15728.64 216097.84
00:21:02.270 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.270 Job: Nvme5n1 ended in about 0.90 seconds with error
00:21:02.270 Verification LBA range: start 0x0 length 0x400
00:21:02.270 Nvme5n1 : 0.90 218.19 13.64 70.88 0.00 203346.21 7579.38 222480.47
00:21:02.270 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.270 Job: Nvme6n1 ended in about 0.91 seconds with error
00:21:02.270 Verification LBA range: start 0x0 length 0x400
00:21:02.270 Nvme6n1 : 0.91 211.05 13.19 70.35 0.00 205030.85 25758.50 211538.81
00:21:02.270 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.270 Job: Nvme7n1 ended in about 0.91 seconds with error
00:21:02.270 Verification LBA range: start 0x0 length 0x400
00:21:02.270 Nvme7n1 : 0.91 216.08 13.50 70.20 0.00 197652.34 16298.52 225215.89
00:21:02.270 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.270 Job: Nvme8n1 ended in about 0.88 seconds with error
00:21:02.270 Verification LBA range: start 0x0 length 0x400
00:21:02.270 Nvme8n1 : 0.88 222.02 13.88 67.97 0.00 190156.49 1894.85 221568.67
00:21:02.270 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.270 Verification LBA range: start 0x0 length 0x400
00:21:02.270 Nvme9n1 : 0.88 218.97 13.69 0.00 0.00 246257.75 15386.71 237069.36
00:21:02.270 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.270 Job: Nvme10n1 ended in about 0.90 seconds with error
00:21:02.270 Verification LBA range: start 0x0 length 0x400
00:21:02.270 Nvme10n1 : 0.90 141.45 8.84 70.73 0.00 250798.97 19033.93 235245.75
00:21:02.270 [2024-11-20T10:15:29.766Z] ===================================================================================================================
00:21:02.270 [2024-11-20T10:15:29.766Z] Total : 2098.20 131.14 638.75 0.00 212475.45 1894.85 237069.36
00:21:02.270 [2024-11-20 11:15:29.616924] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:02.270 [2024-11-20 11:15:29.616984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:02.270 [2024-11-20 11:15:29.617045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58dd50 (9): Bad file descriptor
00:21:02.270 [2024-11-20 11:15:29.617067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e7a60 (9): Bad file descriptor
00:21:02.270 [2024-11-20 11:15:29.617077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58e1b0 (9): Bad file descriptor
00:21:02.270 [2024-11-20 11:15:29.617462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.270 [2024-11-20 11:15:29.617481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f0b50 with addr=10.0.0.2, port=4420
00:21:02.270 [2024-11-20 11:15:29.617491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0b50 is same with the state(6) to be set
00:21:02.270 [2024-11-20 11:15:29.617659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.270 [2024-11-20 11:15:29.617670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa01e30 with addr=10.0.0.2, port=4420
00:21:02.270 [2024-11-20 11:15:29.617677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa01e30 is same with the state(6) to be set
00:21:02.270 [2024-11-20 11:15:29.617834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.270 [2024-11-20 11:15:29.617845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4a2610 with addr=10.0.0.2, port=4420
00:21:02.270 [2024-11-20 11:15:29.617853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4a2610 is same with the state(6) to be set
00:21:02.270 [2024-11-20 11:15:29.617861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:02.270 [2024-11-20 11:15:29.617868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:02.270 [2024-11-20 11:15:29.617876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:02.270 [2024-11-20 11:15:29.617886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:02.270 [2024-11-20 11:15:29.617894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:21:02.270 [2024-11-20 11:15:29.617900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:21:02.270 [2024-11-20 11:15:29.617907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:21:02.270 [2024-11-20 11:15:29.617913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:21:02.270 [2024-11-20 11:15:29.617920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:02.270 [2024-11-20 11:15:29.617926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:02.270 [2024-11-20 11:15:29.617932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:02.270 [2024-11-20 11:15:29.617938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:02.270 [2024-11-20 11:15:29.618532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f0b50 (9): Bad file descriptor 00:21:02.270 [2024-11-20 11:15:29.618548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa01e30 (9): Bad file descriptor 00:21:02.270 [2024-11-20 11:15:29.618557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4a2610 (9): Bad file descriptor 00:21:02.270 [2024-11-20 11:15:29.618600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:02.270 [2024-11-20 11:15:29.618612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:02.270 [2024-11-20 11:15:29.618620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:02.270 [2024-11-20 11:15:29.618632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:02.270 [2024-11-20 11:15:29.618641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:02.270 [2024-11-20 11:15:29.618649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:02.270 [2024-11-20 11:15:29.618657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:02.270 [2024-11-20 11:15:29.618699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:02.270 [2024-11-20 11:15:29.618706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:02.271 [2024-11-20 11:15:29.618713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:21:02.271 [2024-11-20 11:15:29.618720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:02.271 [2024-11-20 11:15:29.618726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:02.271 [2024-11-20 11:15:29.618732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:02.271 [2024-11-20 11:15:29.618738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:02.271 [2024-11-20 11:15:29.618744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:02.271 [2024-11-20 11:15:29.618751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:02.271 [2024-11-20 11:15:29.618757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:02.271 [2024-11-20 11:15:29.618763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:02.271 [2024-11-20 11:15:29.618769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:21:02.271 [2024-11-20 11:15:29.619057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.271 [2024-11-20 11:15:29.619072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e7e20 with addr=10.0.0.2, port=4420 00:21:02.271 [2024-11-20 11:15:29.619079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7e20 is same with the state(6) to be set 00:21:02.271 [2024-11-20 11:15:29.619221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.271 [2024-11-20 11:15:29.619232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bc8b0 with addr=10.0.0.2, port=4420 00:21:02.271 [2024-11-20 11:15:29.619239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bc8b0 is same with the state(6) to be set 00:21:02.271 [2024-11-20 11:15:29.619426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.271 [2024-11-20 11:15:29.619437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58a9e0 with addr=10.0.0.2, port=4420 00:21:02.271 [2024-11-20 11:15:29.619445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58a9e0 is same with the state(6) to be set 00:21:02.271 [2024-11-20 11:15:29.619564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.271 [2024-11-20 11:15:29.619574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58b970 with addr=10.0.0.2, port=4420 00:21:02.271 [2024-11-20 11:15:29.619581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58b970 is same with the state(6) to be set 00:21:02.271 [2024-11-20 11:15:29.619749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.271 [2024-11-20 11:15:29.619760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x58e1b0 with addr=10.0.0.2, port=4420 00:21:02.271 [2024-11-20 11:15:29.619770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58e1b0 is same with the state(6) to be set 00:21:02.271 [2024-11-20 11:15:29.619911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.271 [2024-11-20 11:15:29.619922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e7a60 with addr=10.0.0.2, port=4420 00:21:02.271 [2024-11-20 11:15:29.619929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7a60 is same with the state(6) to be set 00:21:02.271 [2024-11-20 11:15:29.620061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.271 [2024-11-20 11:15:29.620072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x58dd50 with addr=10.0.0.2, port=4420 00:21:02.271 [2024-11-20 11:15:29.620079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58dd50 is same with the state(6) to be set 00:21:02.271 [2024-11-20 11:15:29.620110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e7e20 (9): Bad file descriptor 00:21:02.271 [2024-11-20 11:15:29.620120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bc8b0 (9): Bad file descriptor 00:21:02.271 [2024-11-20 11:15:29.620129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58a9e0 (9): Bad file descriptor 00:21:02.271 [2024-11-20 11:15:29.620138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58b970 (9): Bad file descriptor 00:21:02.271 [2024-11-20 11:15:29.620146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58e1b0 (9): Bad file descriptor 00:21:02.271 [2024-11-20 11:15:29.620155] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e7a60 (9): Bad file descriptor 00:21:02.271 [2024-11-20 11:15:29.620163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58dd50 (9): Bad file descriptor 00:21:02.271 [2024-11-20 11:15:29.620187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:02.271 [2024-11-20 11:15:29.620195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:02.271 [2024-11-20 11:15:29.620202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:02.271 [2024-11-20 11:15:29.620208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:02.271 [2024-11-20 11:15:29.620215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:02.271 [2024-11-20 11:15:29.620221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:02.271 [2024-11-20 11:15:29.620228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:02.271 [2024-11-20 11:15:29.620234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:02.271 [2024-11-20 11:15:29.620240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:02.271 [2024-11-20 11:15:29.620246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:02.271 [2024-11-20 11:15:29.620252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:21:02.271 [2024-11-20 11:15:29.620259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:02.271 [2024-11-20 11:15:29.620266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:02.271 [2024-11-20 11:15:29.620272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:02.271 [2024-11-20 11:15:29.620278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:02.271 [2024-11-20 11:15:29.620286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:02.271 [2024-11-20 11:15:29.620293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:02.271 [2024-11-20 11:15:29.620299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:02.271 [2024-11-20 11:15:29.620305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:02.271 [2024-11-20 11:15:29.620311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:02.271 [2024-11-20 11:15:29.620317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:02.271 [2024-11-20 11:15:29.620323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:02.271 [2024-11-20 11:15:29.620329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:21:02.271 [2024-11-20 11:15:29.620335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:02.271 [2024-11-20 11:15:29.620342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:02.271 [2024-11-20 11:15:29.620347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:02.271 [2024-11-20 11:15:29.620354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:02.271 [2024-11-20 11:15:29.620359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:02.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:03.469 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 4116960 00:21:03.469 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:03.469 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4116960 00:21:03.469 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:03.469 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.469 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:03.469 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.469 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 4116960 
00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.470 11:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.470 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.729 rmmod nvme_tcp 00:21:03.729 rmmod nvme_fabrics 00:21:03.729 rmmod nvme_keyring 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 4116678 ']' 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 4116678 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 4116678 ']' 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 4116678 00:21:03.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4116678) - No such process 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 4116678 is not found' 00:21:03.730 Process with pid 4116678 is not found 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.730 11:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.730 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.636 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.636 00:21:05.636 real 0m8.166s 00:21:05.636 user 0m20.988s 00:21:05.636 sys 0m1.390s 00:21:05.636 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.636 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.636 ************************************ 00:21:05.636 END TEST nvmf_shutdown_tc3 00:21:05.636 ************************************ 00:21:05.896 11:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:05.896 ************************************ 00:21:05.896 START TEST nvmf_shutdown_tc4 00:21:05.896 ************************************ 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.896 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:05.897 11:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:05.897 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:05.897 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.897 11:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:05.897 Found net devices under 0000:86:00.0: cvl_0_0 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.897 11:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:05.897 Found net devices under 0000:86:00.1: cvl_0_1 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.897 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.898 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:05.898 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.157 
11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:06.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:21:06.157 00:21:06.157 --- 10.0.0.2 ping statistics --- 00:21:06.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.157 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:21:06.157 00:21:06.157 --- 10.0.0.1 ping statistics --- 00:21:06.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.157 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:06.157 11:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=4118205 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 4118205 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 4118205 ']' 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.157 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.158 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.158 [2024-11-20 11:15:33.542251] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:21:06.158 [2024-11-20 11:15:33.542296] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.158 [2024-11-20 11:15:33.619761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.417 [2024-11-20 11:15:33.661425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.417 [2024-11-20 11:15:33.661462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.417 [2024-11-20 11:15:33.661469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.417 [2024-11-20 11:15:33.661475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.417 [2024-11-20 11:15:33.661480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.417 [2024-11-20 11:15:33.663002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.417 [2024-11-20 11:15:33.663107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.417 [2024-11-20 11:15:33.663139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.417 [2024-11-20 11:15:33.663139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.986 [2024-11-20 11:15:34.417523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.986 11:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.986 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:07.245 Malloc1 00:21:07.245 [2024-11-20 11:15:34.530979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.245 Malloc2 00:21:07.245 Malloc3 00:21:07.245 Malloc4 00:21:07.245 Malloc5 00:21:07.245 Malloc6 00:21:07.505 Malloc7 00:21:07.505 Malloc8 00:21:07.505 Malloc9 
00:21:07.505 Malloc10 00:21:07.505 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.505 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:07.505 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.505 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:07.505 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=4118503 00:21:07.505 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:07.505 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:07.764 [2024-11-20 11:15:35.043119] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:13.041 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.041 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 4118205 00:21:13.041 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 4118205 ']' 00:21:13.041 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 4118205 00:21:13.041 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:13.041 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.041 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4118205 00:21:13.041 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:13.041 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:13.041 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4118205' 00:21:13.041 killing process with pid 4118205 00:21:13.041 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 4118205 00:21:13.041 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 4118205 00:21:13.041 [2024-11-20 11:15:40.031593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656d80 is same with the state(6) to be set 00:21:13.041 [2024-11-20 
11:15:40.031646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656d80 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.031655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656d80 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.031662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656d80 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.031668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656d80 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.031674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656d80 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.032644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657250 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.032672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657250 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.032681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657250 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.032688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657250 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.032695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657250 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.032702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657250 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.032709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657250 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.032715] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657250 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.033772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657740 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.034220] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16568b0 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.034248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16568b0 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.034257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16568b0 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.034264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16568b0 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.034271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16568b0 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.034278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16568b0 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.034286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16568b0 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.034293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16568b0 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.034303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16568b0 is same with the state(6) to be set 00:21:13.041 [2024-11-20 11:15:40.034310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16568b0 is same with the state(6) to be set 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 starting I/O failed: -6 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error 
(sct=0, sc=8) 00:21:13.041 starting I/O failed: -6 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 starting I/O failed: -6 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 starting I/O failed: -6 00:21:13.041 [2024-11-20 11:15:40.040228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ef000 is same with the state(6) to be set 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 starting I/O failed: -6 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 starting I/O failed: -6 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 starting I/O failed: -6 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 starting I/O failed: -6 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 Write completed with error (sct=0, sc=8) 00:21:13.041 starting I/O failed: -6 00:21:13.042 
00:21:13.042 Write completed with error (sct=0, sc=8)
00:21:13.042 starting I/O failed: -6
00:21:13.042 [the two lines above repeat for every outstanding write on each failing qpair; identical repetitions condensed, unique errors kept below]
00:21:13.042 [2024-11-20 11:15:40.040826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ee190 is same with the state(6) to be set
00:21:13.042 [2024-11-20 11:15:40.040820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.042 [2024-11-20 11:15:40.042184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.042 [2024-11-20 11:15:40.043747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.043 [2024-11-20 11:15:40.045555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.043 NVMe io qpair process completion error
00:21:13.043 [2024-11-20 11:15:40.046628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.043 [2024-11-20 11:15:40.047513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.044 [2024-11-20 11:15:40.048519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.044 [2024-11-20 11:15:40.050203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.044 NVMe io qpair process completion error
00:21:13.045 [2024-11-20 11:15:40.051156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.045 [2024-11-20 11:15:40.052067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.045 [2024-11-20 11:15:40.053080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.046 Write completed with error (sct=0, sc=8)
00:21:13.046 starting I/O failed: -6
00:21:13.046 [repetitions continue]
starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 
00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 [2024-11-20 11:15:40.055125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:13.046 NVMe io qpair process completion error 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 
Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O 
failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 [2024-11-20 11:15:40.056136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed 
with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 [2024-11-20 11:15:40.057043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O 
failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write 
completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 
00:21:13.047 starting I/O failed: -6 00:21:13.047 [2024-11-20 11:15:40.058101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed 
with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write 
completed with error (sct=0, sc=8) 00:21:13.047 starting I/O failed: -6 00:21:13.047 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 [2024-11-20 11:15:40.060111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:13.048 NVMe io qpair process completion error 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 
Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O 
failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 [2024-11-20 11:15:40.061110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O 
failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 Write completed with error (sct=0, sc=8) 00:21:13.048 starting I/O failed: -6 00:21:13.048 [2024-11-20 11:15:40.062038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.048 starting I/O failed: -6 
00:21:13.048 starting I/O failed: -6
00:21:13.048 Write completed with error (sct=0, sc=8)
[... repeated "starting I/O failed: -6" / "Write completed with error (sct=0, sc=8)" entries omitted ...]
00:21:13.049 [2024-11-20 11:15:40.063245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated entries omitted ...]
00:21:13.049 [2024-11-20 11:15:40.067039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.049 NVMe io qpair process completion error
[... repeated entries omitted ...]
00:21:13.049 [2024-11-20 11:15:40.067991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated entries omitted ...]
00:21:13.050 [2024-11-20 11:15:40.068890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated entries omitted ...]
00:21:13.050 [2024-11-20 11:15:40.070091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated entries omitted ...]
00:21:13.051 [2024-11-20 11:15:40.073654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.051 NVMe io qpair process completion error
[... repeated entries omitted ...]
00:21:13.051 [2024-11-20 11:15:40.074701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated entries omitted ...]
00:21:13.051 [2024-11-20 11:15:40.075508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated entries omitted ...]
00:21:13.052 [2024-11-20 11:15:40.076546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated entries omitted ...]
00:21:13.052 [2024-11-20 11:15:40.078724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.052 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:13.053 Write
completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 [2024-11-20 11:15:40.079688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write 
completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O 
failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 [2024-11-20 11:15:40.080604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O 
failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write 
completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 [2024-11-20 11:15:40.081612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.053 starting I/O failed: -6 00:21:13.053 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 
starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 
00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, 
sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 [2024-11-20 11:15:40.086144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:13.054 NVMe io qpair process completion error 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write 
completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 [2024-11-20 11:15:40.087326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 
00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.054 starting I/O failed: -6 00:21:13.054 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write 
completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 [2024-11-20 11:15:40.088447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with 
error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 
Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 [2024-11-20 11:15:40.089522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O failed: -6 00:21:13.055 Write completed with error (sct=0, sc=8) 00:21:13.055 starting I/O 
failed: -6
00:21:13.055 Write completed with error (sct=0, sc=8)
00:21:13.055 starting I/O failed: -6
00:21:13.056 [2024-11-20 11:15:40.094870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.056 NVMe io qpair process completion error
00:21:13.056 [2024-11-20 11:15:40.095854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.056 [2024-11-20 11:15:40.096800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.057 [2024-11-20 11:15:40.097854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.057 [2024-11-20 11:15:40.100376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.057 NVMe io qpair process completion error
00:21:13.057 Initializing NVMe Controllers
00:21:13.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:13.057 Controller IO queue size 128, less than required.
00:21:13.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:13.057 Controller IO queue size 128, less than required.
00:21:13.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:13.057 Controller IO queue size 128, less than required.
00:21:13.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:13.057 Controller IO queue size 128, less than required.
00:21:13.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:13.057 Controller IO queue size 128, less than required.
00:21:13.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:13.057 Controller IO queue size 128, less than required.
00:21:13.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:13.057 Controller IO queue size 128, less than required.
00:21:13.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:13.057 Controller IO queue size 128, less than required.
00:21:13.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:13.057 Controller IO queue size 128, less than required.
00:21:13.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:13.057 Controller IO queue size 128, less than required.
00:21:13.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:13.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:13.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:13.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:13.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:13.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:13.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:13.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:13.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:13.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:13.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:13.057 Initialization complete. Launching workers.
00:21:13.058 ========================================================
00:21:13.058 Latency(us)
00:21:13.058 Device Information : IOPS MiB/s Average min max
00:21:13.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2202.88 94.65 58111.00 746.05 106778.04
00:21:13.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2141.11 92.00 59799.52 933.58 106704.35
00:21:13.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2171.00 93.29 58992.05 745.21 105492.69
00:21:13.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2154.08 92.56 59475.03 863.28 106244.41
00:21:13.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2145.28 92.18 59759.59 846.05 110731.86
00:21:13.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2180.68 93.70 58827.28 699.42 102418.67
00:21:13.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2195.84 94.35 58437.10 848.99 100430.98
00:21:13.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2190.13 94.11 58644.86 789.69 122413.20
00:21:13.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2151.00 92.43 59772.13 883.80 128709.79
00:21:13.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2129.02 91.48 59623.12 1151.11 98632.65
00:21:13.058 ========================================================
00:21:13.058 Total : 21661.02 930.75 59137.85 699.42 128709.79
00:21:13.058
00:21:13.058 [2024-11-20 11:15:40.103369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc2bc0 is same with the state(6) to be set
00:21:13.058 [2024-11-20 11:15:40.103415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc2890 is same with the state(6) to be set
00:21:13.058 [2024-11-20 11:15:40.103445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4900 is same with the state(6) to be set
00:21:13.058 [2024-11-20 11:15:40.103476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc2560 is same with the state(6) to be set
00:21:13.058 [2024-11-20 11:15:40.103506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4720 is same with the state(6) to be set
00:21:13.058 [2024-11-20 11:15:40.103539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc2ef0 is same with the state(6) to be set
00:21:13.058 [2024-11-20 11:15:40.103569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc3410 is same with the state(6) to be set
00:21:13.058 [2024-11-20 11:15:40.103598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc3a70 is same with the state(6) to be set
00:21:13.058 [2024-11-20 11:15:40.103627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc3740 is same with the state(6) to be set
00:21:13.058 [2024-11-20 11:15:40.103656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4ae0 is same with the state(6) to be set
00:21:13.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:13.058 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 4118503
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4118503
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 4118503
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:13.997 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:13.997 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 4118205 ']'
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 4118205
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 4118205 ']'
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 4118205
00:21:14.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4118205) - No such process
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 4118205 is not found'
Process with pid 4118205 is not found
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:14.257 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:16.163 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:16.163
00:21:16.163 real 0m10.391s
00:21:16.163 user 0m27.664s
00:21:16.163 sys 0m5.069s
11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:16.164 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:16.164 ************************************
00:21:16.164 END TEST nvmf_shutdown_tc4
00:21:16.164 ************************************
00:21:16.164 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:21:16.164
00:21:16.164 real 0m42.617s
00:21:16.164 user 1m48.218s
00:21:16.164 sys 0m13.954s
00:21:16.164 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:16.164 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:16.164 ************************************
00:21:16.164 END TEST nvmf_shutdown
00:21:16.164 ************************************
00:21:16.164 11:15:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:16.164 11:15:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:16.164 11:15:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:16.164 11:15:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:16.424 ************************************
00:21:16.424 START TEST nvmf_nsid
00:21:16.424 ************************************
00:21:16.424 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:16.424 * Looking for test storage...
00:21:16.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:16.424 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:21:16.424 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version
00:21:16.424 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:16.425
11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:16.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.425 --rc genhtml_branch_coverage=1 00:21:16.425 --rc genhtml_function_coverage=1 00:21:16.425 --rc genhtml_legend=1 00:21:16.425 --rc geninfo_all_blocks=1 00:21:16.425 --rc 
geninfo_unexecuted_blocks=1 00:21:16.425 00:21:16.425 ' 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:16.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.425 --rc genhtml_branch_coverage=1 00:21:16.425 --rc genhtml_function_coverage=1 00:21:16.425 --rc genhtml_legend=1 00:21:16.425 --rc geninfo_all_blocks=1 00:21:16.425 --rc geninfo_unexecuted_blocks=1 00:21:16.425 00:21:16.425 ' 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:16.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.425 --rc genhtml_branch_coverage=1 00:21:16.425 --rc genhtml_function_coverage=1 00:21:16.425 --rc genhtml_legend=1 00:21:16.425 --rc geninfo_all_blocks=1 00:21:16.425 --rc geninfo_unexecuted_blocks=1 00:21:16.425 00:21:16.425 ' 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:16.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.425 --rc genhtml_branch_coverage=1 00:21:16.425 --rc genhtml_function_coverage=1 00:21:16.425 --rc genhtml_legend=1 00:21:16.425 --rc geninfo_all_blocks=1 00:21:16.425 --rc geninfo_unexecuted_blocks=1 00:21:16.425 00:21:16.425 ' 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.425 11:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:16.425 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.426 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:22.999 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.999 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:22.999 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:22.999 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:22.999 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:22.999 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:22.999 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:22.999 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:22.999 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:23.000 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:23.000 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:23.000 Found net devices under 0000:86:00.0: cvl_0_0 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:23.000 Found net devices under 0000:86:00.1: cvl_0_1 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.000 11:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.000 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:23.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:21:23.000 00:21:23.000 --- 10.0.0.2 ping statistics --- 00:21:23.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.000 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:21:23.000 00:21:23.000 --- 10.0.0.1 ping statistics --- 00:21:23.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.000 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.000 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.001 11:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=4122972 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 4122972 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 4122972 ']' 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.001 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:23.001 [2024-11-20 11:15:49.885940] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:21:23.001 [2024-11-20 11:15:49.885996] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.001 [2024-11-20 11:15:49.968137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.001 [2024-11-20 11:15:50.010335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.001 [2024-11-20 11:15:50.010373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.001 [2024-11-20 11:15:50.010380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.001 [2024-11-20 11:15:50.010386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.001 [2024-11-20 11:15:50.010391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:23.001 [2024-11-20 11:15:50.010808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=4122991 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.001 
11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=72834953-1376-4a90-80d2-681e197b00d1 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=c5736909-cc33-4d00-a5ef-208933f891c8 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=61cacd79-8d39-435d-b633-909f8c4dc691 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:23.001 null0 00:21:23.001 null1 00:21:23.001 null2 00:21:23.001 [2024-11-20 11:15:50.197955] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:21:23.001 [2024-11-20 11:15:50.197997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122991 ] 00:21:23.001 [2024-11-20 11:15:50.200078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.001 [2024-11-20 11:15:50.224256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 4122991 /var/tmp/tgt2.sock 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 4122991 ']' 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:23.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.001 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:23.001 [2024-11-20 11:15:50.273775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.001 [2024-11-20 11:15:50.321024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.261 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.261 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:23.261 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:23.519 [2024-11-20 11:15:50.867246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.519 [2024-11-20 11:15:50.883367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:23.519 nvme0n1 nvme0n2 00:21:23.519 nvme1n1 00:21:23.519 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:23.519 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:23.519 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:24.897 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:25.834 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:25.834 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 72834953-1376-4a90-80d2-681e197b00d1 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:25.834 11:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7283495313764a9080d2681e197b00d1 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7283495313764A9080D2681E197B00D1 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 7283495313764A9080D2681E197B00D1 == \7\2\8\3\4\9\5\3\1\3\7\6\4\A\9\0\8\0\D\2\6\8\1\E\1\9\7\B\0\0\D\1 ]] 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid c5736909-cc33-4d00-a5ef-208933f891c8 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:25.834 
11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c5736909cc334d00a5ef208933f891c8 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C5736909CC334D00A5EF208933F891C8 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ C5736909CC334D00A5EF208933F891C8 == \C\5\7\3\6\9\0\9\C\C\3\3\4\D\0\0\A\5\E\F\2\0\8\9\3\3\F\8\9\1\C\8 ]] 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 61cacd79-8d39-435d-b633-909f8c4dc691 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=61cacd798d39435db633909f8c4dc691 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 61CACD798D39435DB633909F8C4DC691 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 61CACD798D39435DB633909F8C4DC691 == \6\1\C\A\C\D\7\9\8\D\3\9\4\3\5\D\B\6\3\3\9\0\9\F\8\C\4\D\C\6\9\1 ]] 00:21:25.834 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 4122991 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 4122991 ']' 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 4122991 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4122991 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4122991' 00:21:26.094 killing process with pid 4122991 00:21:26.094 11:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 4122991 00:21:26.094 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 4122991 00:21:26.353 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:26.353 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:26.353 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:26.353 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:26.353 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:26.354 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:26.354 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:26.354 rmmod nvme_tcp 00:21:26.354 rmmod nvme_fabrics 00:21:26.354 rmmod nvme_keyring 00:21:26.613 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:26.613 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:26.613 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:26.613 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 4122972 ']' 00:21:26.613 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 4122972 00:21:26.613 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 4122972 ']' 00:21:26.613 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 4122972 00:21:26.613 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:26.613 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.613 11:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4122972 00:21:26.614 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:26.614 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:26.614 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4122972' 00:21:26.614 killing process with pid 4122972 00:21:26.614 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 4122972 00:21:26.614 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 4122972 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.614 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.614 11:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.150 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:29.150 00:21:29.150 real 0m12.468s 00:21:29.150 user 0m9.753s 00:21:29.150 sys 0m5.546s 00:21:29.150 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:29.150 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:29.150 ************************************ 00:21:29.150 END TEST nvmf_nsid 00:21:29.150 ************************************ 00:21:29.150 11:15:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:29.150 00:21:29.150 real 12m4.573s 00:21:29.150 user 26m3.376s 00:21:29.150 sys 3m45.104s 00:21:29.150 11:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:29.150 11:15:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:29.150 ************************************ 00:21:29.150 END TEST nvmf_target_extra 00:21:29.150 ************************************ 00:21:29.150 11:15:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:29.150 11:15:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:29.150 11:15:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:29.150 11:15:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:29.150 ************************************ 00:21:29.150 START TEST nvmf_host 00:21:29.150 ************************************ 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:29.150 * Looking for test storage... 
00:21:29.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:29.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.150 --rc genhtml_branch_coverage=1 00:21:29.150 --rc genhtml_function_coverage=1 00:21:29.150 --rc genhtml_legend=1 00:21:29.150 --rc geninfo_all_blocks=1 00:21:29.150 --rc geninfo_unexecuted_blocks=1 00:21:29.150 00:21:29.150 ' 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:29.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.150 --rc genhtml_branch_coverage=1 00:21:29.150 --rc genhtml_function_coverage=1 00:21:29.150 --rc genhtml_legend=1 00:21:29.150 --rc 
geninfo_all_blocks=1 00:21:29.150 --rc geninfo_unexecuted_blocks=1 00:21:29.150 00:21:29.150 ' 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:29.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.150 --rc genhtml_branch_coverage=1 00:21:29.150 --rc genhtml_function_coverage=1 00:21:29.150 --rc genhtml_legend=1 00:21:29.150 --rc geninfo_all_blocks=1 00:21:29.150 --rc geninfo_unexecuted_blocks=1 00:21:29.150 00:21:29.150 ' 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:29.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.150 --rc genhtml_branch_coverage=1 00:21:29.150 --rc genhtml_function_coverage=1 00:21:29.150 --rc genhtml_legend=1 00:21:29.150 --rc geninfo_all_blocks=1 00:21:29.150 --rc geninfo_unexecuted_blocks=1 00:21:29.150 00:21:29.150 ' 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:29.150 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:29.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.151 ************************************ 00:21:29.151 START TEST nvmf_multicontroller 00:21:29.151 ************************************ 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:29.151 * Looking for test storage... 
00:21:29.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:29.151 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:29.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.411 --rc genhtml_branch_coverage=1 00:21:29.411 --rc genhtml_function_coverage=1 
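[Editor's note] The trace above steps through `scripts/common.sh`'s `cmp_versions` helper comparing the installed `lcov` version (1.15) against 2. A minimal self-contained sketch of that component-wise compare (the function name `lt_version` here is hypothetical; the real script splits on `IFS=.-:` exactly as traced):

```shell
#!/usr/bin/env bash
# Sketch of the traced version compare: split both versions on '.', '-'
# and ':' into arrays, then compare numerically position by position,
# treating missing components as 0.
lt_version() {
    local IFS='.-:'
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=${#v1[@]}
    (( ${#v2[@]} > max )) && max=${#v2[@]}
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

lt_version 1.15 2 && echo "1.15 < 2"      # prints: 1.15 < 2
lt_version 2.1 2.0 || echo "2.1 >= 2.0"   # prints: 2.1 >= 2.0
```

This matches the `lt 1.15 2` call in the trace, which is why the script proceeds to build the lcov 2.x-style `--rc lcov_branch_coverage=1` options.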
00:21:29.411 --rc genhtml_legend=1 00:21:29.411 --rc geninfo_all_blocks=1 00:21:29.411 --rc geninfo_unexecuted_blocks=1 00:21:29.411 00:21:29.411 ' 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:29.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.411 --rc genhtml_branch_coverage=1 00:21:29.411 --rc genhtml_function_coverage=1 00:21:29.411 --rc genhtml_legend=1 00:21:29.411 --rc geninfo_all_blocks=1 00:21:29.411 --rc geninfo_unexecuted_blocks=1 00:21:29.411 00:21:29.411 ' 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:29.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.411 --rc genhtml_branch_coverage=1 00:21:29.411 --rc genhtml_function_coverage=1 00:21:29.411 --rc genhtml_legend=1 00:21:29.411 --rc geninfo_all_blocks=1 00:21:29.411 --rc geninfo_unexecuted_blocks=1 00:21:29.411 00:21:29.411 ' 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:29.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.411 --rc genhtml_branch_coverage=1 00:21:29.411 --rc genhtml_function_coverage=1 00:21:29.411 --rc genhtml_legend=1 00:21:29.411 --rc geninfo_all_blocks=1 00:21:29.411 --rc geninfo_unexecuted_blocks=1 00:21:29.411 00:21:29.411 ' 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.411 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.412 11:15:56 
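[Editor's note] The `paths/export.sh` trace above shows the same `/opt/golangci`, `/opt/protoc` and `/opt/go` directories stacked onto `PATH` many times, because the script prepends unconditionally on every source. A sketch of an idempotent prepend that would keep the list duplicate-free (`path_prepend` and the `MYPATH` demo variable are assumptions, not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
# Idempotent prepend: only add the directory if it is not already in the
# colon-separated list. Demonstrated on MYPATH rather than PATH itself.
path_prepend() {
    case ":$MYPATH:" in
        *":$1:"*) ;;                 # already present: no-op
        *) MYPATH="$1:$MYPATH" ;;
    esac
}

MYPATH="/usr/bin:/bin"
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin      # duplicate call leaves MYPATH unchanged
echo "$MYPATH"                       # prints: /opt/go/1.21.1/bin:/usr/bin:/bin
```

The `case ":$list:"` pattern with surrounding colons is the standard way to match a whole path component without false substring hits.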
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:29.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
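[Editor's note] The trace above records a genuine shell error: `'[' '' -eq 1 ']'` fails at `nvmf/common.sh` line 33 with "integer expression expected", because an empty/unset variable reached a numeric `-eq` test. The log continues regardless since `[` merely returns nonzero. A sketch of the usual guard, defaulting empty or unset to 0 before the numeric comparison:

```shell
#!/usr/bin/env bash
# An empty string passed to 'test -eq' raises "integer expression
# expected". Expanding with a default (:-0) keeps the operand numeric.
maybe_flag=""

if [ "${maybe_flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"        # prints: flag not set
fi
```

The same `${var:-0}` expansion works whether the variable is empty or entirely unset.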
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:29.412 11:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.127 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:36.128 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:36.128 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.128 11:16:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:36.128 Found net devices under 0000:86:00.0: cvl_0_0 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:36.128 Found net devices under 0000:86:00.1: cvl_0_1 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:36.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:21:36.128 00:21:36.128 --- 10.0.0.2 ping statistics --- 00:21:36.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.128 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:36.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:21:36.128 00:21:36.128 --- 10.0.0.1 ping statistics --- 00:21:36.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.128 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=4127308 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 4127308 00:21:36.128 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 4127308 ']' 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 [2024-11-20 11:16:02.723302] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:21:36.129 [2024-11-20 11:16:02.723348] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.129 [2024-11-20 11:16:02.802705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:36.129 [2024-11-20 11:16:02.845166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.129 [2024-11-20 11:16:02.845204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:36.129 [2024-11-20 11:16:02.845212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.129 [2024-11-20 11:16:02.845218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.129 [2024-11-20 11:16:02.845223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.129 [2024-11-20 11:16:02.846600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.129 [2024-11-20 11:16:02.846623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.129 [2024-11-20 11:16:02.846625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 [2024-11-20 11:16:02.982351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 Malloc0 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 [2024-11-20 
11:16:03.040513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 [2024-11-20 11:16:03.048443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 Malloc1 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4127336 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4127336 /var/tmp/bdevperf.sock 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 4127336 ']' 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 NVMe0n1 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.129 1 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:36.129 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:36.130 11:16:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.130 request: 00:21:36.130 { 00:21:36.130 "name": "NVMe0", 00:21:36.130 "trtype": "tcp", 00:21:36.130 "traddr": "10.0.0.2", 00:21:36.130 "adrfam": "ipv4", 00:21:36.130 "trsvcid": "4420", 00:21:36.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.130 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:36.130 "hostaddr": "10.0.0.1", 00:21:36.130 "prchk_reftag": false, 00:21:36.130 "prchk_guard": false, 00:21:36.130 "hdgst": false, 00:21:36.130 "ddgst": false, 00:21:36.130 "allow_unrecognized_csi": false, 00:21:36.130 "method": "bdev_nvme_attach_controller", 00:21:36.130 "req_id": 1 00:21:36.130 } 00:21:36.130 Got JSON-RPC error response 00:21:36.130 response: 00:21:36.130 { 00:21:36.130 "code": -114, 00:21:36.130 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:36.130 } 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:36.130 11:16:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.130 request: 00:21:36.130 { 00:21:36.130 "name": "NVMe0", 00:21:36.130 "trtype": "tcp", 00:21:36.130 "traddr": "10.0.0.2", 00:21:36.130 "adrfam": "ipv4", 00:21:36.130 "trsvcid": "4420", 00:21:36.130 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.130 "hostaddr": "10.0.0.1", 00:21:36.130 "prchk_reftag": false, 00:21:36.130 "prchk_guard": false, 00:21:36.130 "hdgst": false, 00:21:36.130 "ddgst": false, 00:21:36.130 "allow_unrecognized_csi": false, 00:21:36.130 "method": "bdev_nvme_attach_controller", 00:21:36.130 "req_id": 1 00:21:36.130 } 00:21:36.130 Got JSON-RPC error response 00:21:36.130 response: 00:21:36.130 { 00:21:36.130 "code": -114, 00:21:36.130 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:36.130 } 00:21:36.130 11:16:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.130 request: 00:21:36.130 { 00:21:36.130 "name": "NVMe0", 00:21:36.130 "trtype": "tcp", 00:21:36.130 "traddr": "10.0.0.2", 00:21:36.130 "adrfam": "ipv4", 00:21:36.130 "trsvcid": "4420", 00:21:36.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.130 "hostaddr": "10.0.0.1", 00:21:36.130 "prchk_reftag": false, 00:21:36.130 "prchk_guard": false, 00:21:36.130 "hdgst": false, 00:21:36.130 "ddgst": false, 00:21:36.130 "multipath": "disable", 00:21:36.130 "allow_unrecognized_csi": false, 00:21:36.130 "method": "bdev_nvme_attach_controller", 00:21:36.130 "req_id": 1 00:21:36.130 } 00:21:36.130 Got JSON-RPC error response 00:21:36.130 response: 00:21:36.130 { 00:21:36.130 "code": -114, 00:21:36.130 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:36.130 } 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.130 request: 00:21:36.130 { 00:21:36.130 "name": "NVMe0", 00:21:36.130 "trtype": "tcp", 00:21:36.130 "traddr": "10.0.0.2", 00:21:36.130 "adrfam": "ipv4", 00:21:36.130 "trsvcid": "4420", 00:21:36.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.130 "hostaddr": "10.0.0.1", 00:21:36.130 "prchk_reftag": false, 00:21:36.130 "prchk_guard": false, 00:21:36.130 "hdgst": false, 00:21:36.130 "ddgst": false, 00:21:36.130 "multipath": "failover", 00:21:36.130 "allow_unrecognized_csi": false, 00:21:36.130 "method": "bdev_nvme_attach_controller", 00:21:36.130 "req_id": 1 00:21:36.130 } 00:21:36.130 Got JSON-RPC error response 00:21:36.130 response: 00:21:36.130 { 00:21:36.130 "code": -114, 00:21:36.130 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:36.130 } 00:21:36.130 11:16:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.130 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.389 NVMe0n1 00:21:36.389 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.390 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:36.390 11:16:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:37.769 { 00:21:37.769 "results": [ 00:21:37.769 { 00:21:37.769 "job": "NVMe0n1", 00:21:37.769 "core_mask": "0x1", 00:21:37.769 "workload": "write", 00:21:37.769 "status": "finished", 00:21:37.769 "queue_depth": 128, 00:21:37.769 "io_size": 4096, 00:21:37.769 "runtime": 1.003842, 00:21:37.769 "iops": 24230.904863514377, 00:21:37.769 "mibps": 94.65197212310304, 00:21:37.769 "io_failed": 0, 00:21:37.769 "io_timeout": 0, 00:21:37.769 "avg_latency_us": 5275.5912148316565, 00:21:37.769 "min_latency_us": 1495.9304347826087, 00:21:37.769 "max_latency_us": 9232.027826086956 00:21:37.769 } 00:21:37.769 ], 00:21:37.769 "core_count": 1 00:21:37.769 } 00:21:37.769 11:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:37.769 11:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.769 11:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.769 11:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.769 11:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:37.769 11:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 4127336 00:21:37.769 11:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 4127336 ']' 00:21:37.769 11:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 4127336 00:21:37.769 11:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4127336 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4127336' 00:21:37.769 killing process with pid 4127336 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 4127336 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 4127336 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:37.769 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:37.769 [2024-11-20 11:16:03.148566] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:21:37.769 [2024-11-20 11:16:03.148615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127336 ] 00:21:37.769 [2024-11-20 11:16:03.222172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.769 [2024-11-20 11:16:03.263803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.769 [2024-11-20 11:16:03.845187] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 3728ba11-369d-4fd4-9653-7bc833e31606 already exists 00:21:37.769 [2024-11-20 11:16:03.845217] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:3728ba11-369d-4fd4-9653-7bc833e31606 alias for bdev NVMe1n1 00:21:37.769 [2024-11-20 11:16:03.845225] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:37.769 Running I/O for 1 seconds... 00:21:37.769 24196.00 IOPS, 94.52 MiB/s 00:21:37.769 Latency(us) 00:21:37.769 [2024-11-20T10:16:05.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.769 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:37.769 NVMe0n1 : 1.00 24230.90 94.65 0.00 0.00 5275.59 1495.93 9232.03 00:21:37.769 [2024-11-20T10:16:05.265Z] =================================================================================================================== 00:21:37.769 [2024-11-20T10:16:05.265Z] Total : 24230.90 94.65 0.00 0.00 5275.59 1495.93 9232.03 00:21:37.769 Received shutdown signal, test time was about 1.000000 seconds 00:21:37.769 00:21:37.769 Latency(us) 00:21:37.769 [2024-11-20T10:16:05.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.769 [2024-11-20T10:16:05.265Z] =================================================================================================================== 00:21:37.769 [2024-11-20T10:16:05.265Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:37.769 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.769 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.029 rmmod nvme_tcp 00:21:38.029 rmmod nvme_fabrics 00:21:38.029 rmmod nvme_keyring 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 4127308 ']' 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 4127308 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 4127308 ']' 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 4127308 
00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4127308 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4127308' 00:21:38.029 killing process with pid 4127308 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 4127308 00:21:38.029 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 4127308 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.289 11:16:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.194 11:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.194 00:21:40.194 real 0m11.139s 00:21:40.194 user 0m12.186s 00:21:40.194 sys 0m5.142s 00:21:40.194 11:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.194 11:16:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.194 ************************************ 00:21:40.194 END TEST nvmf_multicontroller 00:21:40.194 ************************************ 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.454 ************************************ 00:21:40.454 START TEST nvmf_aer 00:21:40.454 ************************************ 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:40.454 * Looking for test storage... 
00:21:40.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:40.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.454 --rc genhtml_branch_coverage=1 00:21:40.454 --rc genhtml_function_coverage=1 00:21:40.454 --rc genhtml_legend=1 00:21:40.454 --rc geninfo_all_blocks=1 00:21:40.454 --rc geninfo_unexecuted_blocks=1 00:21:40.454 00:21:40.454 ' 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:40.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.454 --rc 
genhtml_branch_coverage=1 00:21:40.454 --rc genhtml_function_coverage=1 00:21:40.454 --rc genhtml_legend=1 00:21:40.454 --rc geninfo_all_blocks=1 00:21:40.454 --rc geninfo_unexecuted_blocks=1 00:21:40.454 00:21:40.454 ' 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:40.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.454 --rc genhtml_branch_coverage=1 00:21:40.454 --rc genhtml_function_coverage=1 00:21:40.454 --rc genhtml_legend=1 00:21:40.454 --rc geninfo_all_blocks=1 00:21:40.454 --rc geninfo_unexecuted_blocks=1 00:21:40.454 00:21:40.454 ' 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:40.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.454 --rc genhtml_branch_coverage=1 00:21:40.454 --rc genhtml_function_coverage=1 00:21:40.454 --rc genhtml_legend=1 00:21:40.454 --rc geninfo_all_blocks=1 00:21:40.454 --rc geninfo_unexecuted_blocks=1 00:21:40.454 00:21:40.454 ' 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.454 11:16:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.454 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.455 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.713 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:40.713 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:40.713 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.713 11:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:47.283 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:47.283 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.283 11:16:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:47.283 Found net devices under 0000:86:00.0: cvl_0_0 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.283 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:47.284 Found net devices under 0000:86:00.1: cvl_0_1 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:47.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:47.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:21:47.284 00:21:47.284 --- 10.0.0.2 ping statistics --- 00:21:47.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.284 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:21:47.284 00:21:47.284 --- 10.0.0.1 ping statistics --- 00:21:47.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.284 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=4131322 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 4131322 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 4131322 ']' 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.284 11:16:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.284 [2024-11-20 11:16:13.953296] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:21:47.284 [2024-11-20 11:16:13.953342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.284 [2024-11-20 11:16:14.030301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.284 [2024-11-20 11:16:14.073109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:47.284 [2024-11-20 11:16:14.073147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.284 [2024-11-20 11:16:14.073154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.284 [2024-11-20 11:16:14.073160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.284 [2024-11-20 11:16:14.073165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.284 [2024-11-20 11:16:14.074726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.284 [2024-11-20 11:16:14.074833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.284 [2024-11-20 11:16:14.074943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.284 [2024-11-20 11:16:14.074944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.284 [2024-11-20 11:16:14.211407] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.284 Malloc0 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.284 [2024-11-20 11:16:14.279734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.284 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.284 [ 00:21:47.284 { 00:21:47.284 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:47.284 "subtype": "Discovery", 00:21:47.284 "listen_addresses": [], 00:21:47.284 "allow_any_host": true, 00:21:47.284 "hosts": [] 00:21:47.284 }, 00:21:47.284 { 00:21:47.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.284 "subtype": "NVMe", 00:21:47.284 "listen_addresses": [ 00:21:47.284 { 00:21:47.284 "trtype": "TCP", 00:21:47.285 "adrfam": "IPv4", 00:21:47.285 "traddr": "10.0.0.2", 00:21:47.285 "trsvcid": "4420" 00:21:47.285 } 00:21:47.285 ], 00:21:47.285 "allow_any_host": true, 00:21:47.285 "hosts": [], 00:21:47.285 "serial_number": "SPDK00000000000001", 00:21:47.285 "model_number": "SPDK bdev Controller", 00:21:47.285 "max_namespaces": 2, 00:21:47.285 "min_cntlid": 1, 00:21:47.285 "max_cntlid": 65519, 00:21:47.285 "namespaces": [ 00:21:47.285 { 00:21:47.285 "nsid": 1, 00:21:47.285 "bdev_name": "Malloc0", 00:21:47.285 "name": "Malloc0", 00:21:47.285 "nguid": "E4251A76E5F143719699209ACC02DF31", 00:21:47.285 "uuid": "e4251a76-e5f1-4371-9699-209acc02df31" 00:21:47.285 } 00:21:47.285 ] 00:21:47.285 } 00:21:47.285 ] 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=4131354 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.285 Malloc1 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.285 Asynchronous Event Request test 00:21:47.285 Attaching to 10.0.0.2 00:21:47.285 Attached to 10.0.0.2 00:21:47.285 Registering asynchronous event callbacks... 00:21:47.285 Starting namespace attribute notice tests for all controllers... 00:21:47.285 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:47.285 aer_cb - Changed Namespace 00:21:47.285 Cleaning up... 
00:21:47.285 [ 00:21:47.285 { 00:21:47.285 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:47.285 "subtype": "Discovery", 00:21:47.285 "listen_addresses": [], 00:21:47.285 "allow_any_host": true, 00:21:47.285 "hosts": [] 00:21:47.285 }, 00:21:47.285 { 00:21:47.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.285 "subtype": "NVMe", 00:21:47.285 "listen_addresses": [ 00:21:47.285 { 00:21:47.285 "trtype": "TCP", 00:21:47.285 "adrfam": "IPv4", 00:21:47.285 "traddr": "10.0.0.2", 00:21:47.285 "trsvcid": "4420" 00:21:47.285 } 00:21:47.285 ], 00:21:47.285 "allow_any_host": true, 00:21:47.285 "hosts": [], 00:21:47.285 "serial_number": "SPDK00000000000001", 00:21:47.285 "model_number": "SPDK bdev Controller", 00:21:47.285 "max_namespaces": 2, 00:21:47.285 "min_cntlid": 1, 00:21:47.285 "max_cntlid": 65519, 00:21:47.285 "namespaces": [ 00:21:47.285 { 00:21:47.285 "nsid": 1, 00:21:47.285 "bdev_name": "Malloc0", 00:21:47.285 "name": "Malloc0", 00:21:47.285 "nguid": "E4251A76E5F143719699209ACC02DF31", 00:21:47.285 "uuid": "e4251a76-e5f1-4371-9699-209acc02df31" 00:21:47.285 }, 00:21:47.285 { 00:21:47.285 "nsid": 2, 00:21:47.285 "bdev_name": "Malloc1", 00:21:47.285 "name": "Malloc1", 00:21:47.285 "nguid": "AAF22D687F984B7FA6440B698F8F6A04", 00:21:47.285 "uuid": "aaf22d68-7f98-4b7f-a644-0b698f8f6a04" 00:21:47.285 } 00:21:47.285 ] 00:21:47.285 } 00:21:47.285 ] 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 4131354 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.285 11:16:14 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.285 rmmod nvme_tcp 00:21:47.285 rmmod nvme_fabrics 00:21:47.285 rmmod nvme_keyring 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
4131322 ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 4131322 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 4131322 ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 4131322 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4131322 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4131322' 00:21:47.285 killing process with pid 4131322 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 4131322 00:21:47.285 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 4131322 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.545 11:16:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.081 11:16:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.081 00:21:50.081 real 0m9.272s 00:21:50.081 user 0m5.177s 00:21:50.081 sys 0m4.873s 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.081 ************************************ 00:21:50.081 END TEST nvmf_aer 00:21:50.081 ************************************ 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.081 ************************************ 00:21:50.081 START TEST nvmf_async_init 00:21:50.081 ************************************ 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:50.081 * Looking for test storage... 
00:21:50.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:50.081 11:16:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:50.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.081 --rc genhtml_branch_coverage=1 00:21:50.081 --rc genhtml_function_coverage=1 00:21:50.081 --rc genhtml_legend=1 00:21:50.081 --rc geninfo_all_blocks=1 00:21:50.081 --rc geninfo_unexecuted_blocks=1 00:21:50.081 
00:21:50.081 ' 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:50.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.081 --rc genhtml_branch_coverage=1 00:21:50.081 --rc genhtml_function_coverage=1 00:21:50.081 --rc genhtml_legend=1 00:21:50.081 --rc geninfo_all_blocks=1 00:21:50.081 --rc geninfo_unexecuted_blocks=1 00:21:50.081 00:21:50.081 ' 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:50.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.081 --rc genhtml_branch_coverage=1 00:21:50.081 --rc genhtml_function_coverage=1 00:21:50.081 --rc genhtml_legend=1 00:21:50.081 --rc geninfo_all_blocks=1 00:21:50.081 --rc geninfo_unexecuted_blocks=1 00:21:50.081 00:21:50.081 ' 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:50.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.081 --rc genhtml_branch_coverage=1 00:21:50.081 --rc genhtml_function_coverage=1 00:21:50.081 --rc genhtml_legend=1 00:21:50.081 --rc geninfo_all_blocks=1 00:21:50.081 --rc geninfo_unexecuted_blocks=1 00:21:50.081 00:21:50.081 ' 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:50.081 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:50.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0a02f751606047278806b0781b1edfa3 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.082 11:16:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:56.651 11:16:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:56.651 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:56.651 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:56.651 Found net devices under 0000:86:00.0: cvl_0_0 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:56.651 Found net devices under 0000:86:00.1: cvl_0_1 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:56.651 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.652 11:16:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:56.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:56.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:21:56.652 00:21:56.652 --- 10.0.0.2 ping statistics --- 00:21:56.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.652 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:21:56.652 00:21:56.652 --- 10.0.0.1 ping statistics --- 00:21:56.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.652 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=4134875 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 4134875 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 4134875 ']' 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 [2024-11-20 11:16:23.273244] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:21:56.652 [2024-11-20 11:16:23.273296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.652 [2024-11-20 11:16:23.354616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.652 [2024-11-20 11:16:23.396218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.652 [2024-11-20 11:16:23.396258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.652 [2024-11-20 11:16:23.396265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.652 [2024-11-20 11:16:23.396272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.652 [2024-11-20 11:16:23.396278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:56.652 [2024-11-20 11:16:23.396826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 [2024-11-20 11:16:23.532942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 null0 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0a02f751606047278806b0781b1edfa3 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 [2024-11-20 11:16:23.585203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 nvme0n1 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.652 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.652 [ 00:21:56.652 { 00:21:56.652 "name": "nvme0n1", 00:21:56.652 "aliases": [ 00:21:56.652 "0a02f751-6060-4727-8806-b0781b1edfa3" 00:21:56.652 ], 00:21:56.652 "product_name": "NVMe disk", 00:21:56.652 "block_size": 512, 00:21:56.652 "num_blocks": 2097152, 00:21:56.652 "uuid": "0a02f751-6060-4727-8806-b0781b1edfa3", 00:21:56.652 "numa_id": 1, 00:21:56.652 "assigned_rate_limits": { 00:21:56.652 "rw_ios_per_sec": 0, 00:21:56.653 "rw_mbytes_per_sec": 0, 00:21:56.653 "r_mbytes_per_sec": 0, 00:21:56.653 "w_mbytes_per_sec": 0 00:21:56.653 }, 00:21:56.653 "claimed": false, 00:21:56.653 "zoned": false, 00:21:56.653 "supported_io_types": { 00:21:56.653 "read": true, 00:21:56.653 "write": true, 00:21:56.653 "unmap": false, 00:21:56.653 "flush": true, 00:21:56.653 "reset": true, 00:21:56.653 "nvme_admin": true, 00:21:56.653 "nvme_io": true, 00:21:56.653 "nvme_io_md": false, 00:21:56.653 "write_zeroes": true, 00:21:56.653 "zcopy": false, 00:21:56.653 "get_zone_info": false, 00:21:56.653 "zone_management": false, 00:21:56.653 "zone_append": false, 00:21:56.653 "compare": true, 00:21:56.653 "compare_and_write": true, 00:21:56.653 "abort": true, 00:21:56.653 "seek_hole": false, 00:21:56.653 "seek_data": false, 00:21:56.653 "copy": true, 00:21:56.653 
"nvme_iov_md": false 00:21:56.653 }, 00:21:56.653 "memory_domains": [ 00:21:56.653 { 00:21:56.653 "dma_device_id": "system", 00:21:56.653 "dma_device_type": 1 00:21:56.653 } 00:21:56.653 ], 00:21:56.653 "driver_specific": { 00:21:56.653 "nvme": [ 00:21:56.653 { 00:21:56.653 "trid": { 00:21:56.653 "trtype": "TCP", 00:21:56.653 "adrfam": "IPv4", 00:21:56.653 "traddr": "10.0.0.2", 00:21:56.653 "trsvcid": "4420", 00:21:56.653 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:56.653 }, 00:21:56.653 "ctrlr_data": { 00:21:56.653 "cntlid": 1, 00:21:56.653 "vendor_id": "0x8086", 00:21:56.653 "model_number": "SPDK bdev Controller", 00:21:56.653 "serial_number": "00000000000000000000", 00:21:56.653 "firmware_revision": "25.01", 00:21:56.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.653 "oacs": { 00:21:56.653 "security": 0, 00:21:56.653 "format": 0, 00:21:56.653 "firmware": 0, 00:21:56.653 "ns_manage": 0 00:21:56.653 }, 00:21:56.653 "multi_ctrlr": true, 00:21:56.653 "ana_reporting": false 00:21:56.653 }, 00:21:56.653 "vs": { 00:21:56.653 "nvme_version": "1.3" 00:21:56.653 }, 00:21:56.653 "ns_data": { 00:21:56.653 "id": 1, 00:21:56.653 "can_share": true 00:21:56.653 } 00:21:56.653 } 00:21:56.653 ], 00:21:56.653 "mp_policy": "active_passive" 00:21:56.653 } 00:21:56.653 } 00:21:56.653 ] 00:21:56.653 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:56.653 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 [2024-11-20 11:16:23.849726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:56.653 [2024-11-20 11:16:23.849781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xbec220 (9): Bad file descriptor 00:21:56.653 [2024-11-20 11:16:23.982028] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:56.653 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:56.653 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 11:16:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 [ 00:21:56.653 { 00:21:56.653 "name": "nvme0n1", 00:21:56.653 "aliases": [ 00:21:56.653 "0a02f751-6060-4727-8806-b0781b1edfa3" 00:21:56.653 ], 00:21:56.653 "product_name": "NVMe disk", 00:21:56.653 "block_size": 512, 00:21:56.653 "num_blocks": 2097152, 00:21:56.653 "uuid": "0a02f751-6060-4727-8806-b0781b1edfa3", 00:21:56.653 "numa_id": 1, 00:21:56.653 "assigned_rate_limits": { 00:21:56.653 "rw_ios_per_sec": 0, 00:21:56.653 "rw_mbytes_per_sec": 0, 00:21:56.653 "r_mbytes_per_sec": 0, 00:21:56.653 "w_mbytes_per_sec": 0 00:21:56.653 }, 00:21:56.653 "claimed": false, 00:21:56.653 "zoned": false, 00:21:56.653 "supported_io_types": { 00:21:56.653 "read": true, 00:21:56.653 "write": true, 00:21:56.653 "unmap": false, 00:21:56.653 "flush": true, 00:21:56.653 "reset": true, 00:21:56.653 "nvme_admin": true, 00:21:56.653 "nvme_io": true, 00:21:56.653 "nvme_io_md": false, 00:21:56.653 "write_zeroes": true, 00:21:56.653 "zcopy": false, 00:21:56.653 "get_zone_info": false, 00:21:56.653 "zone_management": false, 00:21:56.653 "zone_append": false, 00:21:56.653 "compare": true, 00:21:56.653 "compare_and_write": true, 00:21:56.653 "abort": true, 00:21:56.653 "seek_hole": false, 00:21:56.653 "seek_data": false, 00:21:56.653 "copy": true, 00:21:56.653 "nvme_iov_md": false 00:21:56.653 }, 00:21:56.653 "memory_domains": [ 
00:21:56.653 { 00:21:56.653 "dma_device_id": "system", 00:21:56.653 "dma_device_type": 1 00:21:56.653 } 00:21:56.653 ], 00:21:56.653 "driver_specific": { 00:21:56.653 "nvme": [ 00:21:56.653 { 00:21:56.653 "trid": { 00:21:56.653 "trtype": "TCP", 00:21:56.653 "adrfam": "IPv4", 00:21:56.653 "traddr": "10.0.0.2", 00:21:56.653 "trsvcid": "4420", 00:21:56.653 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:56.653 }, 00:21:56.653 "ctrlr_data": { 00:21:56.653 "cntlid": 2, 00:21:56.653 "vendor_id": "0x8086", 00:21:56.653 "model_number": "SPDK bdev Controller", 00:21:56.653 "serial_number": "00000000000000000000", 00:21:56.653 "firmware_revision": "25.01", 00:21:56.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.653 "oacs": { 00:21:56.653 "security": 0, 00:21:56.653 "format": 0, 00:21:56.653 "firmware": 0, 00:21:56.653 "ns_manage": 0 00:21:56.653 }, 00:21:56.653 "multi_ctrlr": true, 00:21:56.653 "ana_reporting": false 00:21:56.653 }, 00:21:56.653 "vs": { 00:21:56.653 "nvme_version": "1.3" 00:21:56.653 }, 00:21:56.653 "ns_data": { 00:21:56.653 "id": 1, 00:21:56.653 "can_share": true 00:21:56.653 } 00:21:56.653 } 00:21:56.653 ], 00:21:56.653 "mp_policy": "active_passive" 00:21:56.653 } 00:21:56.653 } 00:21:56.653 ] 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UgUTuHFiSZ 
00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UgUTuHFiSZ 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.UgUTuHFiSZ 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 [2024-11-20 11:16:24.054341] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:56.653 [2024-11-20 11:16:24.054435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 [2024-11-20 11:16:24.074402] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.653 nvme0n1 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.912 [ 00:21:56.912 { 00:21:56.912 "name": "nvme0n1", 00:21:56.912 "aliases": [ 00:21:56.912 "0a02f751-6060-4727-8806-b0781b1edfa3" 00:21:56.912 ], 00:21:56.912 "product_name": "NVMe disk", 00:21:56.912 "block_size": 512, 00:21:56.912 "num_blocks": 2097152, 00:21:56.912 "uuid": "0a02f751-6060-4727-8806-b0781b1edfa3", 00:21:56.912 "numa_id": 1, 00:21:56.912 "assigned_rate_limits": { 00:21:56.912 "rw_ios_per_sec": 0, 00:21:56.912 
"rw_mbytes_per_sec": 0, 00:21:56.912 "r_mbytes_per_sec": 0, 00:21:56.912 "w_mbytes_per_sec": 0 00:21:56.912 }, 00:21:56.912 "claimed": false, 00:21:56.912 "zoned": false, 00:21:56.912 "supported_io_types": { 00:21:56.912 "read": true, 00:21:56.912 "write": true, 00:21:56.912 "unmap": false, 00:21:56.912 "flush": true, 00:21:56.912 "reset": true, 00:21:56.912 "nvme_admin": true, 00:21:56.912 "nvme_io": true, 00:21:56.912 "nvme_io_md": false, 00:21:56.912 "write_zeroes": true, 00:21:56.912 "zcopy": false, 00:21:56.912 "get_zone_info": false, 00:21:56.912 "zone_management": false, 00:21:56.912 "zone_append": false, 00:21:56.912 "compare": true, 00:21:56.912 "compare_and_write": true, 00:21:56.912 "abort": true, 00:21:56.912 "seek_hole": false, 00:21:56.912 "seek_data": false, 00:21:56.912 "copy": true, 00:21:56.912 "nvme_iov_md": false 00:21:56.912 }, 00:21:56.912 "memory_domains": [ 00:21:56.912 { 00:21:56.912 "dma_device_id": "system", 00:21:56.912 "dma_device_type": 1 00:21:56.912 } 00:21:56.912 ], 00:21:56.912 "driver_specific": { 00:21:56.912 "nvme": [ 00:21:56.912 { 00:21:56.912 "trid": { 00:21:56.912 "trtype": "TCP", 00:21:56.912 "adrfam": "IPv4", 00:21:56.912 "traddr": "10.0.0.2", 00:21:56.912 "trsvcid": "4421", 00:21:56.912 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:56.912 }, 00:21:56.912 "ctrlr_data": { 00:21:56.912 "cntlid": 3, 00:21:56.912 "vendor_id": "0x8086", 00:21:56.912 "model_number": "SPDK bdev Controller", 00:21:56.912 "serial_number": "00000000000000000000", 00:21:56.912 "firmware_revision": "25.01", 00:21:56.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.912 "oacs": { 00:21:56.912 "security": 0, 00:21:56.912 "format": 0, 00:21:56.912 "firmware": 0, 00:21:56.912 "ns_manage": 0 00:21:56.912 }, 00:21:56.912 "multi_ctrlr": true, 00:21:56.912 "ana_reporting": false 00:21:56.912 }, 00:21:56.912 "vs": { 00:21:56.912 "nvme_version": "1.3" 00:21:56.912 }, 00:21:56.912 "ns_data": { 00:21:56.912 "id": 1, 00:21:56.912 "can_share": true 00:21:56.912 } 
00:21:56.912 } 00:21:56.912 ], 00:21:56.912 "mp_policy": "active_passive" 00:21:56.912 } 00:21:56.912 } 00:21:56.912 ] 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.UgUTuHFiSZ 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:56.912 rmmod nvme_tcp 00:21:56.912 rmmod nvme_fabrics 00:21:56.912 rmmod nvme_keyring 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:56.912 11:16:24 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 4134875 ']' 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 4134875 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 4134875 ']' 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 4134875 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4134875 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4134875' 00:21:56.912 killing process with pid 4134875 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 4134875 00:21:56.912 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 4134875 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:57.172 
11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.172 11:16:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.078 11:16:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.078 00:21:59.078 real 0m9.421s 00:21:59.078 user 0m3.059s 00:21:59.078 sys 0m4.806s 00:21:59.078 11:16:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.079 11:16:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 ************************************ 00:21:59.079 END TEST nvmf_async_init 00:21:59.079 ************************************ 00:21:59.079 11:16:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:59.079 11:16:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.079 11:16:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.079 11:16:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.339 ************************************ 00:21:59.339 START TEST dma 00:21:59.339 ************************************ 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:21:59.339 * Looking for test storage... 00:21:59.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:59.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.339 --rc genhtml_branch_coverage=1 00:21:59.339 --rc genhtml_function_coverage=1 00:21:59.339 --rc genhtml_legend=1 00:21:59.339 --rc geninfo_all_blocks=1 00:21:59.339 --rc geninfo_unexecuted_blocks=1 00:21:59.339 00:21:59.339 ' 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:59.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.339 --rc genhtml_branch_coverage=1 00:21:59.339 --rc genhtml_function_coverage=1 
00:21:59.339 --rc genhtml_legend=1 00:21:59.339 --rc geninfo_all_blocks=1 00:21:59.339 --rc geninfo_unexecuted_blocks=1 00:21:59.339 00:21:59.339 ' 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:59.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.339 --rc genhtml_branch_coverage=1 00:21:59.339 --rc genhtml_function_coverage=1 00:21:59.339 --rc genhtml_legend=1 00:21:59.339 --rc geninfo_all_blocks=1 00:21:59.339 --rc geninfo_unexecuted_blocks=1 00:21:59.339 00:21:59.339 ' 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:59.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.339 --rc genhtml_branch_coverage=1 00:21:59.339 --rc genhtml_function_coverage=1 00:21:59.339 --rc genhtml_legend=1 00:21:59.339 --rc geninfo_all_blocks=1 00:21:59.339 --rc geninfo_unexecuted_blocks=1 00:21:59.339 00:21:59.339 ' 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.339 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:59.340 
11:16:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:59.340 00:21:59.340 real 0m0.217s 00:21:59.340 user 0m0.130s 00:21:59.340 sys 0m0.099s 00:21:59.340 11:16:26 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:59.340 ************************************ 00:21:59.340 END TEST dma 00:21:59.340 ************************************ 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.340 11:16:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.601 ************************************ 00:21:59.601 START TEST nvmf_identify 00:21:59.601 ************************************ 00:21:59.601 11:16:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:59.601 * Looking for test storage... 
00:21:59.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.601 11:16:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:59.601 11:16:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:21:59.601 11:16:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:59.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.601 --rc genhtml_branch_coverage=1 00:21:59.601 --rc genhtml_function_coverage=1 00:21:59.601 --rc genhtml_legend=1 00:21:59.601 --rc geninfo_all_blocks=1 00:21:59.601 --rc geninfo_unexecuted_blocks=1 00:21:59.601 00:21:59.601 ' 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:21:59.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.601 --rc genhtml_branch_coverage=1 00:21:59.601 --rc genhtml_function_coverage=1 00:21:59.601 --rc genhtml_legend=1 00:21:59.601 --rc geninfo_all_blocks=1 00:21:59.601 --rc geninfo_unexecuted_blocks=1 00:21:59.601 00:21:59.601 ' 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:59.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.601 --rc genhtml_branch_coverage=1 00:21:59.601 --rc genhtml_function_coverage=1 00:21:59.601 --rc genhtml_legend=1 00:21:59.601 --rc geninfo_all_blocks=1 00:21:59.601 --rc geninfo_unexecuted_blocks=1 00:21:59.601 00:21:59.601 ' 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:59.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.601 --rc genhtml_branch_coverage=1 00:21:59.601 --rc genhtml_function_coverage=1 00:21:59.601 --rc genhtml_legend=1 00:21:59.601 --rc geninfo_all_blocks=1 00:21:59.601 --rc geninfo_unexecuted_blocks=1 00:21:59.601 00:21:59.601 ' 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.601 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.602 11:16:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:06.180 11:16:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:06.180 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.180 
11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:06.180 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:06.180 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:06.181 Found net devices under 0000:86:00.0: cvl_0_0 00:22:06.181 11:16:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:06.181 Found net devices under 0000:86:00.1: cvl_0_1 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:06.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:22:06.181 00:22:06.181 --- 10.0.0.2 ping statistics --- 00:22:06.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.181 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:22:06.181 00:22:06.181 --- 10.0.0.1 ping statistics --- 00:22:06.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.181 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:06.181 11:16:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4138696 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4138696 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 4138696 ']' 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
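The `nvmf_tcp_init` steps traced above (flush both interfaces, move the target NIC into a network namespace, assign a /24 on each side, bring links up, open TCP port 4420, then ping in both directions) can be condensed into a hedged shell sketch. Interface names, the namespace name, and the 10.0.0.x addresses are taken from this log; the function wrapper is purely illustrative and the commands need root to actually run:

```shell
# Sketch of the netns loopback topology the harness builds above (assumes
# root and that cvl_0_0/cvl_0_1 exist, per this log; wrapper name is ours).
setup_nvmf_tcp_netns() {
    local tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"          # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev "$ini_if"      # initiator IP stays on the host
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # Accept NVMe/TCP traffic on the default port before verifying reachability
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # host -> namespace
    ip netns exec "$ns" ping -c 1 10.0.0.1     # namespace -> host
}
```

The two pings mirror the `ping statistics` blocks in the log: a reply in each direction confirms the veth-less, physical-NIC loopback path before the target is started.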
00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.181 [2024-11-20 11:16:33.072193] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:22:06.181 [2024-11-20 11:16:33.072240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.181 [2024-11-20 11:16:33.152942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.181 [2024-11-20 11:16:33.196104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.181 [2024-11-20 11:16:33.196143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.181 [2024-11-20 11:16:33.196151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.181 [2024-11-20 11:16:33.196158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.181 [2024-11-20 11:16:33.196163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
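The target launch and RPC provisioning that `host/identify.sh` performs in this section can be sketched as follows. `rpc_cmd` in the trace is a thin wrapper around SPDK's `scripts/rpc.py` talking to `/var/tmp/spdk.sock`; `SPDK_DIR` is an assumed placeholder (the log's Jenkins workspace path is specific to this run), and the RPC names and arguments are copied from the trace:

```shell
# Sketch of the bring-up traced here (assumes a built SPDK tree at SPDK_DIR
# and the cvl_0_0_ns_spdk namespace from the setup step; wrapper name is ours).
start_and_provision_target() {
    local SPDK_DIR=/path/to/spdk   # placeholder, not the workspace path from the log
    # -i 0: shm id 0, -e 0xFFFF: tracepoint mask, -m 0xF: cores 0-3, as in the log
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    local nvmfpid=$!
    # waitforlisten in the log polls until the app answers on /var/tmp/spdk.sock;
    # then the subsystem is assembled over JSON-RPC:
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR/scripts/rpc.py" nvmf_get_subsystems   # dump state, as the log does
}
```

The final `nvmf_get_subsystems` call is what produces the two-entry JSON dump (discovery subsystem plus `cnode1` with the `Malloc0` namespace) seen further down in the trace.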
00:22:06.181 [2024-11-20 11:16:33.197731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.181 [2024-11-20 11:16:33.197843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.181 [2024-11-20 11:16:33.197925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.181 [2024-11-20 11:16:33.197926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.181 [2024-11-20 11:16:33.310405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.181 Malloc0 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.181 11:16:33 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.181 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.182 [2024-11-20 11:16:33.408393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.182 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.182 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:06.182 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.182 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.182 11:16:33 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.182 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:22:06.182 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.182 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:06.182 [
00:22:06.182 {
00:22:06.182 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:06.182 "subtype": "Discovery",
00:22:06.182 "listen_addresses": [
00:22:06.182 {
00:22:06.182 "trtype": "TCP",
00:22:06.182 "adrfam": "IPv4",
00:22:06.182 "traddr": "10.0.0.2",
00:22:06.182 "trsvcid": "4420"
00:22:06.182 }
00:22:06.182 ],
00:22:06.182 "allow_any_host": true,
00:22:06.182 "hosts": []
00:22:06.182 },
00:22:06.182 {
00:22:06.182 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:06.182 "subtype": "NVMe",
00:22:06.182 "listen_addresses": [
00:22:06.182 {
00:22:06.182 "trtype": "TCP",
00:22:06.182 "adrfam": "IPv4",
00:22:06.182 "traddr": "10.0.0.2",
00:22:06.182 "trsvcid": "4420"
00:22:06.182 }
00:22:06.182 ],
00:22:06.182 "allow_any_host": true,
00:22:06.182 "hosts": [],
00:22:06.182 "serial_number": "SPDK00000000000001",
00:22:06.182 "model_number": "SPDK bdev Controller",
00:22:06.182 "max_namespaces": 32,
00:22:06.182 "min_cntlid": 1,
00:22:06.182 "max_cntlid": 65519,
00:22:06.182 "namespaces": [
00:22:06.182 {
00:22:06.182 "nsid": 1,
00:22:06.182 "bdev_name": "Malloc0",
00:22:06.182 "name": "Malloc0",
00:22:06.182 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:06.182 "eui64": "ABCDEF0123456789",
00:22:06.182 "uuid": "1fc42d71-1818-4c6c-9741-2de8f10f99df"
00:22:06.182 }
00:22:06.182 ]
00:22:06.182 }
00:22:06.182 ]
00:22:06.182 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.182 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:06.182 [2024-11-20 11:16:33.456652] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:22:06.182 [2024-11-20 11:16:33.456690] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138726 ] 00:22:06.182 [2024-11-20 11:16:33.498952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:06.182 [2024-11-20 11:16:33.499006] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:06.182 [2024-11-20 11:16:33.499011] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:06.182 [2024-11-20 11:16:33.499023] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:06.182 [2024-11-20 11:16:33.499033] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:06.182 [2024-11-20 11:16:33.499491] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:06.182 [2024-11-20 11:16:33.499522] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10b9690 0 00:22:06.182 [2024-11-20 11:16:33.512961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:06.182 [2024-11-20 11:16:33.512975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:06.182 [2024-11-20 11:16:33.512980] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:06.182 [2024-11-20 11:16:33.512983] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:06.182 [2024-11-20 11:16:33.513016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.513022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.513026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9690) 00:22:06.182 [2024-11-20 11:16:33.513038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:06.182 [2024-11-20 11:16:33.513056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b100, cid 0, qid 0 00:22:06.182 [2024-11-20 11:16:33.520959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.182 [2024-11-20 11:16:33.520967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.182 [2024-11-20 11:16:33.520970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.520975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b100) on tqpair=0x10b9690 00:22:06.182 [2024-11-20 11:16:33.520987] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:06.182 [2024-11-20 11:16:33.520994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:06.182 [2024-11-20 11:16:33.520999] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:06.182 [2024-11-20 11:16:33.521011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9690) 
00:22:06.182 [2024-11-20 11:16:33.521025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.182 [2024-11-20 11:16:33.521038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b100, cid 0, qid 0 00:22:06.182 [2024-11-20 11:16:33.521212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.182 [2024-11-20 11:16:33.521218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.182 [2024-11-20 11:16:33.521221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b100) on tqpair=0x10b9690 00:22:06.182 [2024-11-20 11:16:33.521229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:06.182 [2024-11-20 11:16:33.521239] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:06.182 [2024-11-20 11:16:33.521246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9690) 00:22:06.182 [2024-11-20 11:16:33.521258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.182 [2024-11-20 11:16:33.521268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b100, cid 0, qid 0 00:22:06.182 [2024-11-20 11:16:33.521334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.182 [2024-11-20 11:16:33.521340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:06.182 [2024-11-20 11:16:33.521342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b100) on tqpair=0x10b9690 00:22:06.182 [2024-11-20 11:16:33.521352] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:06.182 [2024-11-20 11:16:33.521359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:06.182 [2024-11-20 11:16:33.521364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9690) 00:22:06.182 [2024-11-20 11:16:33.521376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.182 [2024-11-20 11:16:33.521386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b100, cid 0, qid 0 00:22:06.182 [2024-11-20 11:16:33.521452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.182 [2024-11-20 11:16:33.521457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.182 [2024-11-20 11:16:33.521460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b100) on tqpair=0x10b9690 00:22:06.182 [2024-11-20 11:16:33.521468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:06.182 [2024-11-20 11:16:33.521477] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9690) 00:22:06.182 [2024-11-20 11:16:33.521489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.182 [2024-11-20 11:16:33.521499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b100, cid 0, qid 0 00:22:06.182 [2024-11-20 11:16:33.521562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.182 [2024-11-20 11:16:33.521568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.182 [2024-11-20 11:16:33.521570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.182 [2024-11-20 11:16:33.521574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b100) on tqpair=0x10b9690 00:22:06.182 [2024-11-20 11:16:33.521578] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:06.182 [2024-11-20 11:16:33.521582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:06.182 [2024-11-20 11:16:33.521589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:06.182 [2024-11-20 11:16:33.521698] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:06.183 [2024-11-20 11:16:33.521703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:06.183 [2024-11-20 11:16:33.521711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.521715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.521718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9690) 00:22:06.183 [2024-11-20 11:16:33.521723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.183 [2024-11-20 11:16:33.521733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b100, cid 0, qid 0 00:22:06.183 [2024-11-20 11:16:33.521807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.183 [2024-11-20 11:16:33.521812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.183 [2024-11-20 11:16:33.521815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.521818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b100) on tqpair=0x10b9690 00:22:06.183 [2024-11-20 11:16:33.521822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:06.183 [2024-11-20 11:16:33.521831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.521835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.521838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9690) 00:22:06.183 [2024-11-20 11:16:33.521843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.183 [2024-11-20 11:16:33.521853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b100, cid 0, qid 0 00:22:06.183 [2024-11-20 
11:16:33.521919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.183 [2024-11-20 11:16:33.521925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.183 [2024-11-20 11:16:33.521927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.521931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b100) on tqpair=0x10b9690 00:22:06.183 [2024-11-20 11:16:33.521935] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:06.183 [2024-11-20 11:16:33.521939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:06.183 [2024-11-20 11:16:33.521946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:06.183 [2024-11-20 11:16:33.521960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:06.183 [2024-11-20 11:16:33.521968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.521971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9690) 00:22:06.183 [2024-11-20 11:16:33.521977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.183 [2024-11-20 11:16:33.521987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b100, cid 0, qid 0 00:22:06.183 [2024-11-20 11:16:33.522084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.183 [2024-11-20 11:16:33.522089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:06.183 [2024-11-20 11:16:33.522097] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522101] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b9690): datao=0, datal=4096, cccid=0 00:22:06.183 [2024-11-20 11:16:33.522105] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x111b100) on tqpair(0x10b9690): expected_datao=0, payload_size=4096 00:22:06.183 [2024-11-20 11:16:33.522109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522116] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522120] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.183 [2024-11-20 11:16:33.522137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.183 [2024-11-20 11:16:33.522140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b100) on tqpair=0x10b9690 00:22:06.183 [2024-11-20 11:16:33.522151] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:06.183 [2024-11-20 11:16:33.522156] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:06.183 [2024-11-20 11:16:33.522160] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:06.183 [2024-11-20 11:16:33.522167] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:06.183 [2024-11-20 11:16:33.522172] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:06.183 [2024-11-20 11:16:33.522176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:06.183 [2024-11-20 11:16:33.522186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:06.183 [2024-11-20 11:16:33.522192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9690) 00:22:06.183 [2024-11-20 11:16:33.522205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:06.183 [2024-11-20 11:16:33.522215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b100, cid 0, qid 0 00:22:06.183 [2024-11-20 11:16:33.522286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.183 [2024-11-20 11:16:33.522292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.183 [2024-11-20 11:16:33.522294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b100) on tqpair=0x10b9690 00:22:06.183 [2024-11-20 11:16:33.522304] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9690) 00:22:06.183 [2024-11-20 11:16:33.522316] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.183 [2024-11-20 11:16:33.522321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10b9690) 00:22:06.183 [2024-11-20 11:16:33.522332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.183 [2024-11-20 11:16:33.522339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10b9690) 00:22:06.183 [2024-11-20 11:16:33.522350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.183 [2024-11-20 11:16:33.522355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.183 [2024-11-20 11:16:33.522366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.183 [2024-11-20 11:16:33.522370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:06.183 [2024-11-20 11:16:33.522379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:06.183 [2024-11-20 11:16:33.522384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b9690) 00:22:06.183 [2024-11-20 11:16:33.522393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.183 [2024-11-20 11:16:33.522404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b100, cid 0, qid 0 00:22:06.183 [2024-11-20 11:16:33.522408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b280, cid 1, qid 0 00:22:06.183 [2024-11-20 11:16:33.522412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b400, cid 2, qid 0 00:22:06.183 [2024-11-20 11:16:33.522417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.183 [2024-11-20 11:16:33.522420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b700, cid 4, qid 0 00:22:06.183 [2024-11-20 11:16:33.522516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.183 [2024-11-20 11:16:33.522522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.183 [2024-11-20 11:16:33.522525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b700) on tqpair=0x10b9690 00:22:06.183 [2024-11-20 11:16:33.522536] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:06.183 [2024-11-20 11:16:33.522540] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:06.183 [2024-11-20 11:16:33.522549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b9690) 00:22:06.183 [2024-11-20 11:16:33.522558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.183 [2024-11-20 11:16:33.522568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b700, cid 4, qid 0 00:22:06.183 [2024-11-20 11:16:33.522641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.183 [2024-11-20 11:16:33.522647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.183 [2024-11-20 11:16:33.522650] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522654] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b9690): datao=0, datal=4096, cccid=4 00:22:06.183 [2024-11-20 11:16:33.522657] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x111b700) on tqpair(0x10b9690): expected_datao=0, payload_size=4096 00:22:06.183 [2024-11-20 11:16:33.522664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.183 [2024-11-20 11:16:33.522674] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.522677] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.563101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.184 [2024-11-20 11:16:33.563113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.184 [2024-11-20 11:16:33.563116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.563119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x111b700) on tqpair=0x10b9690 00:22:06.184 [2024-11-20 11:16:33.563132] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:06.184 [2024-11-20 11:16:33.563156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.563161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b9690) 00:22:06.184 [2024-11-20 11:16:33.563168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.184 [2024-11-20 11:16:33.563175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.563178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.563181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10b9690) 00:22:06.184 [2024-11-20 11:16:33.563186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.184 [2024-11-20 11:16:33.563203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b700, cid 4, qid 0 00:22:06.184 [2024-11-20 11:16:33.563208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b880, cid 5, qid 0 00:22:06.184 [2024-11-20 11:16:33.563313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.184 [2024-11-20 11:16:33.563319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.184 [2024-11-20 11:16:33.563322] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.563325] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b9690): datao=0, datal=1024, cccid=4 00:22:06.184 [2024-11-20 11:16:33.563329] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x111b700) on tqpair(0x10b9690): expected_datao=0, payload_size=1024 00:22:06.184 [2024-11-20 11:16:33.563333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.563339] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.563342] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.563347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.184 [2024-11-20 11:16:33.563352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.184 [2024-11-20 11:16:33.563355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.563358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b880) on tqpair=0x10b9690 00:22:06.184 [2024-11-20 11:16:33.607959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.184 [2024-11-20 11:16:33.607968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.184 [2024-11-20 11:16:33.607971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.607974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b700) on tqpair=0x10b9690 00:22:06.184 [2024-11-20 11:16:33.607984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.607987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b9690) 00:22:06.184 [2024-11-20 11:16:33.607994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.184 [2024-11-20 11:16:33.608013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b700, cid 4, qid 0 00:22:06.184 [2024-11-20 11:16:33.608095] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.184 [2024-11-20 11:16:33.608101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.184 [2024-11-20 11:16:33.608104] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.608107] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b9690): datao=0, datal=3072, cccid=4 00:22:06.184 [2024-11-20 11:16:33.608111] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x111b700) on tqpair(0x10b9690): expected_datao=0, payload_size=3072 00:22:06.184 [2024-11-20 11:16:33.608115] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.608120] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.608124] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.608134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.184 [2024-11-20 11:16:33.608139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.184 [2024-11-20 11:16:33.608142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.608145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b700) on tqpair=0x10b9690 00:22:06.184 [2024-11-20 11:16:33.608153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.608156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b9690) 00:22:06.184 [2024-11-20 11:16:33.608162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.184 [2024-11-20 11:16:33.608175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b700, cid 4, qid 0 00:22:06.184 [2024-11-20 
11:16:33.608253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.184 [2024-11-20 11:16:33.608258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.184 [2024-11-20 11:16:33.608261] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.608264] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b9690): datao=0, datal=8, cccid=4 00:22:06.184 [2024-11-20 11:16:33.608268] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x111b700) on tqpair(0x10b9690): expected_datao=0, payload_size=8 00:22:06.184 [2024-11-20 11:16:33.608272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.608277] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.608280] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.649066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.184 [2024-11-20 11:16:33.649076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.184 [2024-11-20 11:16:33.649079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.184 [2024-11-20 11:16:33.649082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b700) on tqpair=0x10b9690 00:22:06.184 ===================================================== 00:22:06.184 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:06.184 ===================================================== 00:22:06.184 Controller Capabilities/Features 00:22:06.184 ================================ 00:22:06.184 Vendor ID: 0000 00:22:06.184 Subsystem Vendor ID: 0000 00:22:06.184 Serial Number: .................... 00:22:06.184 Model Number: ........................................ 
00:22:06.184 Firmware Version: 25.01 00:22:06.184 Recommended Arb Burst: 0 00:22:06.184 IEEE OUI Identifier: 00 00 00 00:22:06.184 Multi-path I/O 00:22:06.184 May have multiple subsystem ports: No 00:22:06.184 May have multiple controllers: No 00:22:06.184 Associated with SR-IOV VF: No 00:22:06.184 Max Data Transfer Size: 131072 00:22:06.184 Max Number of Namespaces: 0 00:22:06.184 Max Number of I/O Queues: 1024 00:22:06.184 NVMe Specification Version (VS): 1.3 00:22:06.184 NVMe Specification Version (Identify): 1.3 00:22:06.184 Maximum Queue Entries: 128 00:22:06.184 Contiguous Queues Required: Yes 00:22:06.184 Arbitration Mechanisms Supported 00:22:06.184 Weighted Round Robin: Not Supported 00:22:06.184 Vendor Specific: Not Supported 00:22:06.184 Reset Timeout: 15000 ms 00:22:06.184 Doorbell Stride: 4 bytes 00:22:06.184 NVM Subsystem Reset: Not Supported 00:22:06.184 Command Sets Supported 00:22:06.184 NVM Command Set: Supported 00:22:06.184 Boot Partition: Not Supported 00:22:06.184 Memory Page Size Minimum: 4096 bytes 00:22:06.184 Memory Page Size Maximum: 4096 bytes 00:22:06.184 Persistent Memory Region: Not Supported 00:22:06.184 Optional Asynchronous Events Supported 00:22:06.184 Namespace Attribute Notices: Not Supported 00:22:06.184 Firmware Activation Notices: Not Supported 00:22:06.184 ANA Change Notices: Not Supported 00:22:06.184 PLE Aggregate Log Change Notices: Not Supported 00:22:06.184 LBA Status Info Alert Notices: Not Supported 00:22:06.184 EGE Aggregate Log Change Notices: Not Supported 00:22:06.184 Normal NVM Subsystem Shutdown event: Not Supported 00:22:06.184 Zone Descriptor Change Notices: Not Supported 00:22:06.184 Discovery Log Change Notices: Supported 00:22:06.184 Controller Attributes 00:22:06.184 128-bit Host Identifier: Not Supported 00:22:06.184 Non-Operational Permissive Mode: Not Supported 00:22:06.184 NVM Sets: Not Supported 00:22:06.184 Read Recovery Levels: Not Supported 00:22:06.184 Endurance Groups: Not Supported 00:22:06.184 
Predictable Latency Mode: Not Supported 00:22:06.184 Traffic Based Keep ALive: Not Supported 00:22:06.184 Namespace Granularity: Not Supported 00:22:06.184 SQ Associations: Not Supported 00:22:06.184 UUID List: Not Supported 00:22:06.184 Multi-Domain Subsystem: Not Supported 00:22:06.184 Fixed Capacity Management: Not Supported 00:22:06.184 Variable Capacity Management: Not Supported 00:22:06.184 Delete Endurance Group: Not Supported 00:22:06.184 Delete NVM Set: Not Supported 00:22:06.184 Extended LBA Formats Supported: Not Supported 00:22:06.184 Flexible Data Placement Supported: Not Supported 00:22:06.184 00:22:06.184 Controller Memory Buffer Support 00:22:06.184 ================================ 00:22:06.184 Supported: No 00:22:06.184 00:22:06.184 Persistent Memory Region Support 00:22:06.184 ================================ 00:22:06.184 Supported: No 00:22:06.184 00:22:06.184 Admin Command Set Attributes 00:22:06.184 ============================ 00:22:06.184 Security Send/Receive: Not Supported 00:22:06.184 Format NVM: Not Supported 00:22:06.184 Firmware Activate/Download: Not Supported 00:22:06.185 Namespace Management: Not Supported 00:22:06.185 Device Self-Test: Not Supported 00:22:06.185 Directives: Not Supported 00:22:06.185 NVMe-MI: Not Supported 00:22:06.185 Virtualization Management: Not Supported 00:22:06.185 Doorbell Buffer Config: Not Supported 00:22:06.185 Get LBA Status Capability: Not Supported 00:22:06.185 Command & Feature Lockdown Capability: Not Supported 00:22:06.185 Abort Command Limit: 1 00:22:06.185 Async Event Request Limit: 4 00:22:06.185 Number of Firmware Slots: N/A 00:22:06.185 Firmware Slot 1 Read-Only: N/A 00:22:06.185 Firmware Activation Without Reset: N/A 00:22:06.185 Multiple Update Detection Support: N/A 00:22:06.185 Firmware Update Granularity: No Information Provided 00:22:06.185 Per-Namespace SMART Log: No 00:22:06.185 Asymmetric Namespace Access Log Page: Not Supported 00:22:06.185 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:06.185 Command Effects Log Page: Not Supported 00:22:06.185 Get Log Page Extended Data: Supported 00:22:06.185 Telemetry Log Pages: Not Supported 00:22:06.185 Persistent Event Log Pages: Not Supported 00:22:06.185 Supported Log Pages Log Page: May Support 00:22:06.185 Commands Supported & Effects Log Page: Not Supported 00:22:06.185 Feature Identifiers & Effects Log Page:May Support 00:22:06.185 NVMe-MI Commands & Effects Log Page: May Support 00:22:06.185 Data Area 4 for Telemetry Log: Not Supported 00:22:06.185 Error Log Page Entries Supported: 128 00:22:06.185 Keep Alive: Not Supported 00:22:06.185 00:22:06.185 NVM Command Set Attributes 00:22:06.185 ========================== 00:22:06.185 Submission Queue Entry Size 00:22:06.185 Max: 1 00:22:06.185 Min: 1 00:22:06.185 Completion Queue Entry Size 00:22:06.185 Max: 1 00:22:06.185 Min: 1 00:22:06.185 Number of Namespaces: 0 00:22:06.185 Compare Command: Not Supported 00:22:06.185 Write Uncorrectable Command: Not Supported 00:22:06.185 Dataset Management Command: Not Supported 00:22:06.185 Write Zeroes Command: Not Supported 00:22:06.185 Set Features Save Field: Not Supported 00:22:06.185 Reservations: Not Supported 00:22:06.185 Timestamp: Not Supported 00:22:06.185 Copy: Not Supported 00:22:06.185 Volatile Write Cache: Not Present 00:22:06.185 Atomic Write Unit (Normal): 1 00:22:06.185 Atomic Write Unit (PFail): 1 00:22:06.185 Atomic Compare & Write Unit: 1 00:22:06.185 Fused Compare & Write: Supported 00:22:06.185 Scatter-Gather List 00:22:06.185 SGL Command Set: Supported 00:22:06.185 SGL Keyed: Supported 00:22:06.185 SGL Bit Bucket Descriptor: Not Supported 00:22:06.185 SGL Metadata Pointer: Not Supported 00:22:06.185 Oversized SGL: Not Supported 00:22:06.185 SGL Metadata Address: Not Supported 00:22:06.185 SGL Offset: Supported 00:22:06.185 Transport SGL Data Block: Not Supported 00:22:06.185 Replay Protected Memory Block: Not Supported 00:22:06.185 00:22:06.185 
Firmware Slot Information 00:22:06.185 ========================= 00:22:06.185 Active slot: 0 00:22:06.185 00:22:06.185 00:22:06.185 Error Log 00:22:06.185 ========= 00:22:06.185 00:22:06.185 Active Namespaces 00:22:06.185 ================= 00:22:06.185 Discovery Log Page 00:22:06.185 ================== 00:22:06.185 Generation Counter: 2 00:22:06.185 Number of Records: 2 00:22:06.185 Record Format: 0 00:22:06.185 00:22:06.185 Discovery Log Entry 0 00:22:06.185 ---------------------- 00:22:06.185 Transport Type: 3 (TCP) 00:22:06.185 Address Family: 1 (IPv4) 00:22:06.185 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:06.185 Entry Flags: 00:22:06.185 Duplicate Returned Information: 1 00:22:06.185 Explicit Persistent Connection Support for Discovery: 1 00:22:06.185 Transport Requirements: 00:22:06.185 Secure Channel: Not Required 00:22:06.185 Port ID: 0 (0x0000) 00:22:06.185 Controller ID: 65535 (0xffff) 00:22:06.185 Admin Max SQ Size: 128 00:22:06.185 Transport Service Identifier: 4420 00:22:06.185 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:06.185 Transport Address: 10.0.0.2 00:22:06.185 Discovery Log Entry 1 00:22:06.185 ---------------------- 00:22:06.185 Transport Type: 3 (TCP) 00:22:06.185 Address Family: 1 (IPv4) 00:22:06.185 Subsystem Type: 2 (NVM Subsystem) 00:22:06.185 Entry Flags: 00:22:06.185 Duplicate Returned Information: 0 00:22:06.185 Explicit Persistent Connection Support for Discovery: 0 00:22:06.185 Transport Requirements: 00:22:06.185 Secure Channel: Not Required 00:22:06.185 Port ID: 0 (0x0000) 00:22:06.185 Controller ID: 65535 (0xffff) 00:22:06.185 Admin Max SQ Size: 128 00:22:06.185 Transport Service Identifier: 4420 00:22:06.185 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:06.185 Transport Address: 10.0.0.2 [2024-11-20 11:16:33.649166] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:06.185 [2024-11-20 
11:16:33.649178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b100) on tqpair=0x10b9690 00:22:06.185 [2024-11-20 11:16:33.649185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.185 [2024-11-20 11:16:33.649190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b280) on tqpair=0x10b9690 00:22:06.185 [2024-11-20 11:16:33.649194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.185 [2024-11-20 11:16:33.649199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b400) on tqpair=0x10b9690 00:22:06.185 [2024-11-20 11:16:33.649205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.185 [2024-11-20 11:16:33.649210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.185 [2024-11-20 11:16:33.649215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.185 [2024-11-20 11:16:33.649225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.185 [2024-11-20 11:16:33.649229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.185 [2024-11-20 11:16:33.649232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.185 [2024-11-20 11:16:33.649239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.185 [2024-11-20 11:16:33.649253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.185 [2024-11-20 11:16:33.649315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.185 [2024-11-20 
11:16:33.649322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.185 [2024-11-20 11:16:33.649326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.185 [2024-11-20 11:16:33.649330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.185 [2024-11-20 11:16:33.649337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.185 [2024-11-20 11:16:33.649340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.185 [2024-11-20 11:16:33.649343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.185 [2024-11-20 11:16:33.649349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.185 [2024-11-20 11:16:33.649361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.185 [2024-11-20 11:16:33.649432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.185 [2024-11-20 11:16:33.649439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.185 [2024-11-20 11:16:33.649442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.185 [2024-11-20 11:16:33.649445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.185 [2024-11-20 11:16:33.649450] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:06.186 [2024-11-20 11:16:33.649454] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:06.186 [2024-11-20 11:16:33.649462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 
[2024-11-20 11:16:33.649470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.649476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.649486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.649551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.649558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.186 [2024-11-20 11:16:33.649561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.649574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.649586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.649598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.649669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.649675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.186 [2024-11-20 11:16:33.649678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on 
tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.649690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.649705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.649714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.649781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.649786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.186 [2024-11-20 11:16:33.649789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.649801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.649814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.649824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.649903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.649909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:06.186 [2024-11-20 11:16:33.649911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.649922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.649929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.649935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.649944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.650019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.650025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.186 [2024-11-20 11:16:33.650028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.650039] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650043] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.650052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.650064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.650136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.650142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.186 [2024-11-20 11:16:33.650145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.650156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.650168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.650178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.650243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.650249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.186 [2024-11-20 11:16:33.650252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.650265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.650277] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.650286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.650345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.650350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.186 [2024-11-20 11:16:33.650353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.650364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.650377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.650386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.650463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.650468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.186 [2024-11-20 11:16:33.650471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.650482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650486] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.650494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.650503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.650598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.650604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.186 [2024-11-20 11:16:33.650607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.650619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.650632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.650643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.650711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.650716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.186 [2024-11-20 11:16:33.650719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650724] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.186 [2024-11-20 11:16:33.650732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.186 [2024-11-20 11:16:33.650739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.186 [2024-11-20 11:16:33.650744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.186 [2024-11-20 11:16:33.650754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.186 [2024-11-20 11:16:33.650828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.186 [2024-11-20 11:16:33.650834] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.187 [2024-11-20 11:16:33.650836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.650840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.187 [2024-11-20 11:16:33.650847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.650851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.650854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.187 [2024-11-20 11:16:33.650860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.187 [2024-11-20 11:16:33.650869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.187 [2024-11-20 11:16:33.650944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.187 [2024-11-20 
11:16:33.650956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.187 [2024-11-20 11:16:33.650959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.650962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.187 [2024-11-20 11:16:33.650971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.650974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.650977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.187 [2024-11-20 11:16:33.650983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.187 [2024-11-20 11:16:33.650992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.187 [2024-11-20 11:16:33.651063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.187 [2024-11-20 11:16:33.651070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.187 [2024-11-20 11:16:33.651073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.651077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.187 [2024-11-20 11:16:33.651085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.651088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.651091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.187 [2024-11-20 11:16:33.651097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.187 [2024-11-20 
11:16:33.651106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.187 [2024-11-20 11:16:33.651168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.187 [2024-11-20 11:16:33.651173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.187 [2024-11-20 11:16:33.651176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.651179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.187 [2024-11-20 11:16:33.651189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.651192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.651195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.187 [2024-11-20 11:16:33.651201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.187 [2024-11-20 11:16:33.651210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.187 [2024-11-20 11:16:33.651277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.187 [2024-11-20 11:16:33.651282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.187 [2024-11-20 11:16:33.651285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.651288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.187 [2024-11-20 11:16:33.651296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.651300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.651303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.187 [2024-11-20 11:16:33.651309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.187 [2024-11-20 11:16:33.651318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.187 [2024-11-20 11:16:33.654956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.187 [2024-11-20 11:16:33.654965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.187 [2024-11-20 11:16:33.654969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.654973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.187 [2024-11-20 11:16:33.654984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.654987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.654990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9690) 00:22:06.187 [2024-11-20 11:16:33.654996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.187 [2024-11-20 11:16:33.655008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x111b580, cid 3, qid 0 00:22:06.187 [2024-11-20 11:16:33.655131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.187 [2024-11-20 11:16:33.655137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.187 [2024-11-20 11:16:33.655143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.187 [2024-11-20 11:16:33.655147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x111b580) on tqpair=0x10b9690 00:22:06.187 [2024-11-20 11:16:33.655153] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:22:06.187 00:22:06.187 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:06.451 [2024-11-20 11:16:33.694508] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:22:06.451 [2024-11-20 11:16:33.694542] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138843 ] 00:22:06.451 [2024-11-20 11:16:33.732791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:06.451 [2024-11-20 11:16:33.732835] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:06.451 [2024-11-20 11:16:33.732840] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:06.451 [2024-11-20 11:16:33.732851] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:06.451 [2024-11-20 11:16:33.732859] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:06.451 [2024-11-20 11:16:33.740177] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:06.451 [2024-11-20 11:16:33.740202] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12d8690 0 00:22:06.451 [2024-11-20 11:16:33.747964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:06.451 [2024-11-20 11:16:33.747977] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:06.451 [2024-11-20 11:16:33.747982] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:06.451 [2024-11-20 11:16:33.747985] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:06.452 [2024-11-20 11:16:33.748009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.748014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.748017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8690) 00:22:06.452 [2024-11-20 11:16:33.748026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:06.452 [2024-11-20 11:16:33.748042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a100, cid 0, qid 0 00:22:06.452 [2024-11-20 11:16:33.758960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.452 [2024-11-20 11:16:33.758970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.452 [2024-11-20 11:16:33.758973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.758977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a100) on tqpair=0x12d8690 00:22:06.452 [2024-11-20 11:16:33.758988] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:06.452 [2024-11-20 11:16:33.758994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:06.452 [2024-11-20 11:16:33.758999] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:06.452 [2024-11-20 11:16:33.759010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759017] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8690) 00:22:06.452 [2024-11-20 11:16:33.759027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-11-20 11:16:33.759041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a100, cid 0, qid 0 00:22:06.452 [2024-11-20 11:16:33.759215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.452 [2024-11-20 11:16:33.759223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.452 [2024-11-20 11:16:33.759226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a100) on tqpair=0x12d8690 00:22:06.452 [2024-11-20 11:16:33.759234] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:06.452 [2024-11-20 11:16:33.759241] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:06.452 [2024-11-20 11:16:33.759248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8690) 00:22:06.452 [2024-11-20 11:16:33.759262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-11-20 11:16:33.759273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a100, cid 0, qid 0 00:22:06.452 [2024-11-20 
11:16:33.759334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.452 [2024-11-20 11:16:33.759340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.452 [2024-11-20 11:16:33.759343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a100) on tqpair=0x12d8690 00:22:06.452 [2024-11-20 11:16:33.759352] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:06.452 [2024-11-20 11:16:33.759359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:06.452 [2024-11-20 11:16:33.759365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8690) 00:22:06.452 [2024-11-20 11:16:33.759379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-11-20 11:16:33.759389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a100, cid 0, qid 0 00:22:06.452 [2024-11-20 11:16:33.759453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.452 [2024-11-20 11:16:33.759460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.452 [2024-11-20 11:16:33.759465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a100) on tqpair=0x12d8690 00:22:06.452 [2024-11-20 11:16:33.759474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:06.452 [2024-11-20 11:16:33.759483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8690) 00:22:06.452 [2024-11-20 11:16:33.759495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-11-20 11:16:33.759507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a100, cid 0, qid 0 00:22:06.452 [2024-11-20 11:16:33.759572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.452 [2024-11-20 11:16:33.759578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.452 [2024-11-20 11:16:33.759581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a100) on tqpair=0x12d8690 00:22:06.452 [2024-11-20 11:16:33.759588] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:06.452 [2024-11-20 11:16:33.759592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:06.452 [2024-11-20 11:16:33.759599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:06.452 [2024-11-20 11:16:33.759707] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:06.452 [2024-11-20 11:16:33.759711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:06.452 [2024-11-20 11:16:33.759717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8690) 00:22:06.452 [2024-11-20 11:16:33.759730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-11-20 11:16:33.759739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a100, cid 0, qid 0 00:22:06.452 [2024-11-20 11:16:33.759804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.452 [2024-11-20 11:16:33.759810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.452 [2024-11-20 11:16:33.759813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a100) on tqpair=0x12d8690 00:22:06.452 [2024-11-20 11:16:33.759820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:06.452 [2024-11-20 11:16:33.759828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8690) 00:22:06.452 [2024-11-20 11:16:33.759840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-11-20 11:16:33.759850] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a100, cid 0, qid 0 00:22:06.452 [2024-11-20 11:16:33.759911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.452 [2024-11-20 11:16:33.759917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.452 [2024-11-20 11:16:33.759920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a100) on tqpair=0x12d8690 00:22:06.452 [2024-11-20 11:16:33.759927] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:06.452 [2024-11-20 11:16:33.759931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:06.452 [2024-11-20 11:16:33.759939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:06.452 [2024-11-20 11:16:33.759961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:06.452 [2024-11-20 11:16:33.759969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.759973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8690) 00:22:06.452 [2024-11-20 11:16:33.759978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.452 [2024-11-20 11:16:33.759988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a100, cid 0, qid 0 00:22:06.452 [2024-11-20 11:16:33.760095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.452 [2024-11-20 
11:16:33.760101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.452 [2024-11-20 11:16:33.760104] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.760108] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8690): datao=0, datal=4096, cccid=0 00:22:06.452 [2024-11-20 11:16:33.760112] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133a100) on tqpair(0x12d8690): expected_datao=0, payload_size=4096 00:22:06.452 [2024-11-20 11:16:33.760116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.760122] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.760125] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.760147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.452 [2024-11-20 11:16:33.760153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.452 [2024-11-20 11:16:33.760156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.452 [2024-11-20 11:16:33.760159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a100) on tqpair=0x12d8690 00:22:06.452 [2024-11-20 11:16:33.760165] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:06.453 [2024-11-20 11:16:33.760169] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:06.453 [2024-11-20 11:16:33.760173] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:06.453 [2024-11-20 11:16:33.760179] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:06.453 [2024-11-20 11:16:33.760183] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:06.453 [2024-11-20 11:16:33.760187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.760197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.760202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8690) 00:22:06.453 [2024-11-20 11:16:33.760215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:06.453 [2024-11-20 11:16:33.760225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a100, cid 0, qid 0 00:22:06.453 [2024-11-20 11:16:33.760295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.453 [2024-11-20 11:16:33.760301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.453 [2024-11-20 11:16:33.760304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a100) on tqpair=0x12d8690 00:22:06.453 [2024-11-20 11:16:33.760313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d8690) 00:22:06.453 [2024-11-20 11:16:33.760326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.453 [2024-11-20 11:16:33.760332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12d8690) 00:22:06.453 [2024-11-20 11:16:33.760343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.453 [2024-11-20 11:16:33.760348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12d8690) 00:22:06.453 [2024-11-20 11:16:33.760359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.453 [2024-11-20 11:16:33.760364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.453 [2024-11-20 11:16:33.760376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.453 [2024-11-20 11:16:33.760380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.760388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set 
keep alive timeout (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.760393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8690) 00:22:06.453 [2024-11-20 11:16:33.760402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.453 [2024-11-20 11:16:33.760413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a100, cid 0, qid 0 00:22:06.453 [2024-11-20 11:16:33.760418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a280, cid 1, qid 0 00:22:06.453 [2024-11-20 11:16:33.760422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a400, cid 2, qid 0 00:22:06.453 [2024-11-20 11:16:33.760426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.453 [2024-11-20 11:16:33.760430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a700, cid 4, qid 0 00:22:06.453 [2024-11-20 11:16:33.760527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.453 [2024-11-20 11:16:33.760533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.453 [2024-11-20 11:16:33.760536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a700) on tqpair=0x12d8690 00:22:06.453 [2024-11-20 11:16:33.760546] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:06.453 [2024-11-20 11:16:33.760550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:06.453 [2024-11-20 
11:16:33.760558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.760565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.760570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8690) 00:22:06.453 [2024-11-20 11:16:33.760582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:06.453 [2024-11-20 11:16:33.760592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a700, cid 4, qid 0 00:22:06.453 [2024-11-20 11:16:33.760667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.453 [2024-11-20 11:16:33.760673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.453 [2024-11-20 11:16:33.760676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a700) on tqpair=0x12d8690 00:22:06.453 [2024-11-20 11:16:33.760731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.760740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.760746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760750] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8690) 00:22:06.453 [2024-11-20 11:16:33.760755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.453 [2024-11-20 11:16:33.760766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a700, cid 4, qid 0 00:22:06.453 [2024-11-20 11:16:33.760839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.453 [2024-11-20 11:16:33.760845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.453 [2024-11-20 11:16:33.760848] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760851] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8690): datao=0, datal=4096, cccid=4 00:22:06.453 [2024-11-20 11:16:33.760855] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133a700) on tqpair(0x12d8690): expected_datao=0, payload_size=4096 00:22:06.453 [2024-11-20 11:16:33.760859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760869] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.760873] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.801151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.453 [2024-11-20 11:16:33.801161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.453 [2024-11-20 11:16:33.801165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.801168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a700) on tqpair=0x12d8690 00:22:06.453 [2024-11-20 11:16:33.801178] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 
was added 00:22:06.453 [2024-11-20 11:16:33.801187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.801195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.801202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.801205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8690) 00:22:06.453 [2024-11-20 11:16:33.801211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.453 [2024-11-20 11:16:33.801225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a700, cid 4, qid 0 00:22:06.453 [2024-11-20 11:16:33.801305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.453 [2024-11-20 11:16:33.801310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.453 [2024-11-20 11:16:33.801313] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.801316] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8690): datao=0, datal=4096, cccid=4 00:22:06.453 [2024-11-20 11:16:33.801321] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133a700) on tqpair(0x12d8690): expected_datao=0, payload_size=4096 00:22:06.453 [2024-11-20 11:16:33.801324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.801340] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.801344] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.801381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:22:06.453 [2024-11-20 11:16:33.801386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.453 [2024-11-20 11:16:33.801389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.453 [2024-11-20 11:16:33.801392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a700) on tqpair=0x12d8690 00:22:06.453 [2024-11-20 11:16:33.801403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:06.453 [2024-11-20 11:16:33.801412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:06.454 [2024-11-20 11:16:33.801418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.801421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8690) 00:22:06.454 [2024-11-20 11:16:33.801427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.454 [2024-11-20 11:16:33.801437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a700, cid 4, qid 0 00:22:06.454 [2024-11-20 11:16:33.801516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.454 [2024-11-20 11:16:33.801522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.454 [2024-11-20 11:16:33.801525] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.801528] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8690): datao=0, datal=4096, cccid=4 00:22:06.454 [2024-11-20 11:16:33.801532] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133a700) on tqpair(0x12d8690): expected_datao=0, 
payload_size=4096 00:22:06.454 [2024-11-20 11:16:33.801536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.801546] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.801549] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.454 [2024-11-20 11:16:33.842023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.454 [2024-11-20 11:16:33.842026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a700) on tqpair=0x12d8690 00:22:06.454 [2024-11-20 11:16:33.842038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:06.454 [2024-11-20 11:16:33.842045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:06.454 [2024-11-20 11:16:33.842053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:06.454 [2024-11-20 11:16:33.842062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:06.454 [2024-11-20 11:16:33.842067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:06.454 [2024-11-20 11:16:33.842072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:06.454 [2024-11-20 11:16:33.842077] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:06.454 [2024-11-20 11:16:33.842081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:06.454 [2024-11-20 11:16:33.842085] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:06.454 [2024-11-20 11:16:33.842097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8690) 00:22:06.454 [2024-11-20 11:16:33.842106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.454 [2024-11-20 11:16:33.842112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12d8690) 00:22:06.454 [2024-11-20 11:16:33.842124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.454 [2024-11-20 11:16:33.842138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a700, cid 4, qid 0 00:22:06.454 [2024-11-20 11:16:33.842142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a880, cid 5, qid 0 00:22:06.454 [2024-11-20 11:16:33.842217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.454 [2024-11-20 11:16:33.842223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.454 [2024-11-20 11:16:33.842226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 
11:16:33.842229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a700) on tqpair=0x12d8690 00:22:06.454 [2024-11-20 11:16:33.842235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.454 [2024-11-20 11:16:33.842240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.454 [2024-11-20 11:16:33.842243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a880) on tqpair=0x12d8690 00:22:06.454 [2024-11-20 11:16:33.842254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12d8690) 00:22:06.454 [2024-11-20 11:16:33.842263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.454 [2024-11-20 11:16:33.842273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a880, cid 5, qid 0 00:22:06.454 [2024-11-20 11:16:33.842355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.454 [2024-11-20 11:16:33.842361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.454 [2024-11-20 11:16:33.842364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a880) on tqpair=0x12d8690 00:22:06.454 [2024-11-20 11:16:33.842375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12d8690) 00:22:06.454 [2024-11-20 11:16:33.842386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 
cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.454 [2024-11-20 11:16:33.842396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a880, cid 5, qid 0 00:22:06.454 [2024-11-20 11:16:33.842460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.454 [2024-11-20 11:16:33.842465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.454 [2024-11-20 11:16:33.842468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a880) on tqpair=0x12d8690 00:22:06.454 [2024-11-20 11:16:33.842479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12d8690) 00:22:06.454 [2024-11-20 11:16:33.842488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.454 [2024-11-20 11:16:33.842497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a880, cid 5, qid 0 00:22:06.454 [2024-11-20 11:16:33.842564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.454 [2024-11-20 11:16:33.842570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.454 [2024-11-20 11:16:33.842573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a880) on tqpair=0x12d8690 00:22:06.454 [2024-11-20 11:16:33.842588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12d8690) 00:22:06.454 [2024-11-20 11:16:33.842598] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.454 [2024-11-20 11:16:33.842604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d8690) 00:22:06.454 [2024-11-20 11:16:33.842612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.454 [2024-11-20 11:16:33.842618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x12d8690) 00:22:06.454 [2024-11-20 11:16:33.842627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.454 [2024-11-20 11:16:33.842633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12d8690) 00:22:06.454 [2024-11-20 11:16:33.842641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.454 [2024-11-20 11:16:33.842652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a880, cid 5, qid 0 00:22:06.454 [2024-11-20 11:16:33.842656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a700, cid 4, qid 0 00:22:06.454 [2024-11-20 11:16:33.842661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133aa00, cid 6, qid 0 00:22:06.454 [2024-11-20 11:16:33.842664] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133ab80, cid 7, qid 0 00:22:06.454 [2024-11-20 11:16:33.842810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.454 [2024-11-20 11:16:33.842816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.454 [2024-11-20 11:16:33.842820] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842827] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8690): datao=0, datal=8192, cccid=5 00:22:06.454 [2024-11-20 11:16:33.842831] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133a880) on tqpair(0x12d8690): expected_datao=0, payload_size=8192 00:22:06.454 [2024-11-20 11:16:33.842835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842849] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842853] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.454 [2024-11-20 11:16:33.842863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.454 [2024-11-20 11:16:33.842866] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842868] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8690): datao=0, datal=512, cccid=4 00:22:06.454 [2024-11-20 11:16:33.842872] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133a700) on tqpair(0x12d8690): expected_datao=0, payload_size=512 00:22:06.454 [2024-11-20 11:16:33.842876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842882] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842885] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:22:06.454 [2024-11-20 11:16:33.842889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.455 [2024-11-20 11:16:33.842894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.455 [2024-11-20 11:16:33.842897] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.842900] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8690): datao=0, datal=512, cccid=6 00:22:06.455 [2024-11-20 11:16:33.842904] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133aa00) on tqpair(0x12d8690): expected_datao=0, payload_size=512 00:22:06.455 [2024-11-20 11:16:33.842907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.842913] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.842916] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.842921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.455 [2024-11-20 11:16:33.842925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.455 [2024-11-20 11:16:33.842928] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.842931] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d8690): datao=0, datal=4096, cccid=7 00:22:06.455 [2024-11-20 11:16:33.842935] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133ab80) on tqpair(0x12d8690): expected_datao=0, payload_size=4096 00:22:06.455 [2024-11-20 11:16:33.842939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.842944] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.846952] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.846961] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.455 [2024-11-20 11:16:33.846966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.455 [2024-11-20 11:16:33.846969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.846972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a880) on tqpair=0x12d8690 00:22:06.455 [2024-11-20 11:16:33.846982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.455 [2024-11-20 11:16:33.846988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.455 [2024-11-20 11:16:33.846990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.846994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a700) on tqpair=0x12d8690 00:22:06.455 [2024-11-20 11:16:33.847002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.455 [2024-11-20 11:16:33.847009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.455 [2024-11-20 11:16:33.847012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.847015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133aa00) on tqpair=0x12d8690 00:22:06.455 [2024-11-20 11:16:33.847021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.455 [2024-11-20 11:16:33.847026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.455 [2024-11-20 11:16:33.847029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.455 [2024-11-20 11:16:33.847032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133ab80) on tqpair=0x12d8690 00:22:06.455 ===================================================== 00:22:06.455 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:06.455 
===================================================== 00:22:06.455 Controller Capabilities/Features 00:22:06.455 ================================ 00:22:06.455 Vendor ID: 8086 00:22:06.455 Subsystem Vendor ID: 8086 00:22:06.455 Serial Number: SPDK00000000000001 00:22:06.455 Model Number: SPDK bdev Controller 00:22:06.455 Firmware Version: 25.01 00:22:06.455 Recommended Arb Burst: 6 00:22:06.455 IEEE OUI Identifier: e4 d2 5c 00:22:06.455 Multi-path I/O 00:22:06.455 May have multiple subsystem ports: Yes 00:22:06.455 May have multiple controllers: Yes 00:22:06.455 Associated with SR-IOV VF: No 00:22:06.455 Max Data Transfer Size: 131072 00:22:06.455 Max Number of Namespaces: 32 00:22:06.455 Max Number of I/O Queues: 127 00:22:06.455 NVMe Specification Version (VS): 1.3 00:22:06.455 NVMe Specification Version (Identify): 1.3 00:22:06.455 Maximum Queue Entries: 128 00:22:06.455 Contiguous Queues Required: Yes 00:22:06.455 Arbitration Mechanisms Supported 00:22:06.455 Weighted Round Robin: Not Supported 00:22:06.455 Vendor Specific: Not Supported 00:22:06.455 Reset Timeout: 15000 ms 00:22:06.455 Doorbell Stride: 4 bytes 00:22:06.455 NVM Subsystem Reset: Not Supported 00:22:06.455 Command Sets Supported 00:22:06.455 NVM Command Set: Supported 00:22:06.455 Boot Partition: Not Supported 00:22:06.455 Memory Page Size Minimum: 4096 bytes 00:22:06.455 Memory Page Size Maximum: 4096 bytes 00:22:06.455 Persistent Memory Region: Not Supported 00:22:06.455 Optional Asynchronous Events Supported 00:22:06.455 Namespace Attribute Notices: Supported 00:22:06.455 Firmware Activation Notices: Not Supported 00:22:06.455 ANA Change Notices: Not Supported 00:22:06.455 PLE Aggregate Log Change Notices: Not Supported 00:22:06.455 LBA Status Info Alert Notices: Not Supported 00:22:06.455 EGE Aggregate Log Change Notices: Not Supported 00:22:06.455 Normal NVM Subsystem Shutdown event: Not Supported 00:22:06.455 Zone Descriptor Change Notices: Not Supported 00:22:06.455 Discovery Log Change 
Notices: Not Supported 00:22:06.455 Controller Attributes 00:22:06.455 128-bit Host Identifier: Supported 00:22:06.455 Non-Operational Permissive Mode: Not Supported 00:22:06.455 NVM Sets: Not Supported 00:22:06.455 Read Recovery Levels: Not Supported 00:22:06.455 Endurance Groups: Not Supported 00:22:06.455 Predictable Latency Mode: Not Supported 00:22:06.455 Traffic Based Keep ALive: Not Supported 00:22:06.455 Namespace Granularity: Not Supported 00:22:06.455 SQ Associations: Not Supported 00:22:06.455 UUID List: Not Supported 00:22:06.455 Multi-Domain Subsystem: Not Supported 00:22:06.455 Fixed Capacity Management: Not Supported 00:22:06.455 Variable Capacity Management: Not Supported 00:22:06.455 Delete Endurance Group: Not Supported 00:22:06.455 Delete NVM Set: Not Supported 00:22:06.455 Extended LBA Formats Supported: Not Supported 00:22:06.455 Flexible Data Placement Supported: Not Supported 00:22:06.455 00:22:06.455 Controller Memory Buffer Support 00:22:06.455 ================================ 00:22:06.455 Supported: No 00:22:06.455 00:22:06.455 Persistent Memory Region Support 00:22:06.455 ================================ 00:22:06.455 Supported: No 00:22:06.455 00:22:06.455 Admin Command Set Attributes 00:22:06.455 ============================ 00:22:06.455 Security Send/Receive: Not Supported 00:22:06.455 Format NVM: Not Supported 00:22:06.455 Firmware Activate/Download: Not Supported 00:22:06.455 Namespace Management: Not Supported 00:22:06.455 Device Self-Test: Not Supported 00:22:06.455 Directives: Not Supported 00:22:06.455 NVMe-MI: Not Supported 00:22:06.455 Virtualization Management: Not Supported 00:22:06.455 Doorbell Buffer Config: Not Supported 00:22:06.455 Get LBA Status Capability: Not Supported 00:22:06.455 Command & Feature Lockdown Capability: Not Supported 00:22:06.455 Abort Command Limit: 4 00:22:06.455 Async Event Request Limit: 4 00:22:06.455 Number of Firmware Slots: N/A 00:22:06.455 Firmware Slot 1 Read-Only: N/A 00:22:06.455 Firmware 
Activation Without Reset: N/A 00:22:06.455 Multiple Update Detection Support: N/A 00:22:06.455 Firmware Update Granularity: No Information Provided 00:22:06.455 Per-Namespace SMART Log: No 00:22:06.455 Asymmetric Namespace Access Log Page: Not Supported 00:22:06.455 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:06.455 Command Effects Log Page: Supported 00:22:06.455 Get Log Page Extended Data: Supported 00:22:06.455 Telemetry Log Pages: Not Supported 00:22:06.455 Persistent Event Log Pages: Not Supported 00:22:06.455 Supported Log Pages Log Page: May Support 00:22:06.455 Commands Supported & Effects Log Page: Not Supported 00:22:06.455 Feature Identifiers & Effects Log Page:May Support 00:22:06.455 NVMe-MI Commands & Effects Log Page: May Support 00:22:06.455 Data Area 4 for Telemetry Log: Not Supported 00:22:06.455 Error Log Page Entries Supported: 128 00:22:06.455 Keep Alive: Supported 00:22:06.455 Keep Alive Granularity: 10000 ms 00:22:06.455 00:22:06.455 NVM Command Set Attributes 00:22:06.455 ========================== 00:22:06.455 Submission Queue Entry Size 00:22:06.455 Max: 64 00:22:06.455 Min: 64 00:22:06.455 Completion Queue Entry Size 00:22:06.455 Max: 16 00:22:06.455 Min: 16 00:22:06.455 Number of Namespaces: 32 00:22:06.455 Compare Command: Supported 00:22:06.455 Write Uncorrectable Command: Not Supported 00:22:06.455 Dataset Management Command: Supported 00:22:06.455 Write Zeroes Command: Supported 00:22:06.455 Set Features Save Field: Not Supported 00:22:06.455 Reservations: Supported 00:22:06.455 Timestamp: Not Supported 00:22:06.455 Copy: Supported 00:22:06.455 Volatile Write Cache: Present 00:22:06.455 Atomic Write Unit (Normal): 1 00:22:06.455 Atomic Write Unit (PFail): 1 00:22:06.455 Atomic Compare & Write Unit: 1 00:22:06.455 Fused Compare & Write: Supported 00:22:06.455 Scatter-Gather List 00:22:06.455 SGL Command Set: Supported 00:22:06.455 SGL Keyed: Supported 00:22:06.455 SGL Bit Bucket Descriptor: Not Supported 00:22:06.455 SGL Metadata 
Pointer: Not Supported 00:22:06.455 Oversized SGL: Not Supported 00:22:06.455 SGL Metadata Address: Not Supported 00:22:06.455 SGL Offset: Supported 00:22:06.456 Transport SGL Data Block: Not Supported 00:22:06.456 Replay Protected Memory Block: Not Supported 00:22:06.456 00:22:06.456 Firmware Slot Information 00:22:06.456 ========================= 00:22:06.456 Active slot: 1 00:22:06.456 Slot 1 Firmware Revision: 25.01 00:22:06.456 00:22:06.456 00:22:06.456 Commands Supported and Effects 00:22:06.456 ============================== 00:22:06.456 Admin Commands 00:22:06.456 -------------- 00:22:06.456 Get Log Page (02h): Supported 00:22:06.456 Identify (06h): Supported 00:22:06.456 Abort (08h): Supported 00:22:06.456 Set Features (09h): Supported 00:22:06.456 Get Features (0Ah): Supported 00:22:06.456 Asynchronous Event Request (0Ch): Supported 00:22:06.456 Keep Alive (18h): Supported 00:22:06.456 I/O Commands 00:22:06.456 ------------ 00:22:06.456 Flush (00h): Supported LBA-Change 00:22:06.456 Write (01h): Supported LBA-Change 00:22:06.456 Read (02h): Supported 00:22:06.456 Compare (05h): Supported 00:22:06.456 Write Zeroes (08h): Supported LBA-Change 00:22:06.456 Dataset Management (09h): Supported LBA-Change 00:22:06.456 Copy (19h): Supported LBA-Change 00:22:06.456 00:22:06.456 Error Log 00:22:06.456 ========= 00:22:06.456 00:22:06.456 Arbitration 00:22:06.456 =========== 00:22:06.456 Arbitration Burst: 1 00:22:06.456 00:22:06.456 Power Management 00:22:06.456 ================ 00:22:06.456 Number of Power States: 1 00:22:06.456 Current Power State: Power State #0 00:22:06.456 Power State #0: 00:22:06.456 Max Power: 0.00 W 00:22:06.456 Non-Operational State: Operational 00:22:06.456 Entry Latency: Not Reported 00:22:06.456 Exit Latency: Not Reported 00:22:06.456 Relative Read Throughput: 0 00:22:06.456 Relative Read Latency: 0 00:22:06.456 Relative Write Throughput: 0 00:22:06.456 Relative Write Latency: 0 00:22:06.456 Idle Power: Not Reported 00:22:06.456 Active 
Power: Not Reported 00:22:06.456 Non-Operational Permissive Mode: Not Supported 00:22:06.456 00:22:06.456 Health Information 00:22:06.456 ================== 00:22:06.456 Critical Warnings: 00:22:06.456 Available Spare Space: OK 00:22:06.456 Temperature: OK 00:22:06.456 Device Reliability: OK 00:22:06.456 Read Only: No 00:22:06.456 Volatile Memory Backup: OK 00:22:06.456 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:06.456 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:06.456 Available Spare: 0% 00:22:06.456 Available Spare Threshold: 0% 00:22:06.456 Life Percentage Used:[2024-11-20 11:16:33.847114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12d8690) 00:22:06.456 [2024-11-20 11:16:33.847125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.456 [2024-11-20 11:16:33.847137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133ab80, cid 7, qid 0 00:22:06.456 [2024-11-20 11:16:33.847213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.456 [2024-11-20 11:16:33.847219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.456 [2024-11-20 11:16:33.847222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133ab80) on tqpair=0x12d8690 00:22:06.456 [2024-11-20 11:16:33.847250] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:06.456 [2024-11-20 11:16:33.847259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a100) on tqpair=0x12d8690 00:22:06.456 [2024-11-20 11:16:33.847264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.456 [2024-11-20 11:16:33.847269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a280) on tqpair=0x12d8690 00:22:06.456 [2024-11-20 11:16:33.847273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.456 [2024-11-20 11:16:33.847277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a400) on tqpair=0x12d8690 00:22:06.456 [2024-11-20 11:16:33.847281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.456 [2024-11-20 11:16:33.847285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.456 [2024-11-20 11:16:33.847289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.456 [2024-11-20 11:16:33.847296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.456 [2024-11-20 11:16:33.847308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.456 [2024-11-20 11:16:33.847320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.456 [2024-11-20 11:16:33.847381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.456 [2024-11-20 11:16:33.847387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.456 [2024-11-20 11:16:33.847390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.456 [2024-11-20 
11:16:33.847393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.456 [2024-11-20 11:16:33.847399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.456 [2024-11-20 11:16:33.847414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.456 [2024-11-20 11:16:33.847426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.456 [2024-11-20 11:16:33.847496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.456 [2024-11-20 11:16:33.847502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.456 [2024-11-20 11:16:33.847505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.456 [2024-11-20 11:16:33.847511] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:06.456 [2024-11-20 11:16:33.847516] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:06.456 [2024-11-20 11:16:33.847523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.456 [2024-11-20 11:16:33.847536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.456 [2024-11-20 11:16:33.847545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.456 [2024-11-20 11:16:33.847608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.456 [2024-11-20 11:16:33.847613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.456 [2024-11-20 11:16:33.847616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.456 [2024-11-20 11:16:33.847628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.456 [2024-11-20 11:16:33.847640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.456 [2024-11-20 11:16:33.847650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.456 [2024-11-20 11:16:33.847712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.456 [2024-11-20 11:16:33.847717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.456 [2024-11-20 11:16:33.847720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.456 [2024-11-20 11:16:33.847723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.456 [2024-11-20 11:16:33.847731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.457 [2024-11-20 11:16:33.847735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.457 
[2024-11-20 11:16:33.847738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.457 [2024-11-20 11:16:33.847744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.457 [2024-11-20 11:16:33.847753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.457 [2024-11-20 11:16:33.847820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.457 [2024-11-20 11:16:33.847825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.457 [2024-11-20 11:16:33.847828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.457 [2024-11-20 11:16:33.847831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.459 [2024-11-20 11:16:33.850380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:06.459 [2024-11-20 11:16:33.850386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.459 [2024-11-20 11:16:33.850392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.459 [2024-11-20 11:16:33.850401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.459 [2024-11-20 11:16:33.850464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.459 [2024-11-20 11:16:33.850472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.459 [2024-11-20 11:16:33.850475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.459 [2024-11-20 11:16:33.850486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.459 [2024-11-20 11:16:33.850498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.459 [2024-11-20 11:16:33.850507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.459 [2024-11-20 11:16:33.850572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.459 [2024-11-20 11:16:33.850577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.459 [2024-11-20 11:16:33.850580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) 
on tqpair=0x12d8690 00:22:06.459 [2024-11-20 11:16:33.850592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.459 [2024-11-20 11:16:33.850604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.459 [2024-11-20 11:16:33.850613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.459 [2024-11-20 11:16:33.850682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.459 [2024-11-20 11:16:33.850687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.459 [2024-11-20 11:16:33.850690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.459 [2024-11-20 11:16:33.850702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.459 [2024-11-20 11:16:33.850714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.459 [2024-11-20 11:16:33.850724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.459 [2024-11-20 11:16:33.850784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.459 [2024-11-20 11:16:33.850789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:22:06.459 [2024-11-20 11:16:33.850792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.459 [2024-11-20 11:16:33.850803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.459 [2024-11-20 11:16:33.850815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.459 [2024-11-20 11:16:33.850825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.459 [2024-11-20 11:16:33.850892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.459 [2024-11-20 11:16:33.850897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.459 [2024-11-20 11:16:33.850902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.459 [2024-11-20 11:16:33.850915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.850921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.459 [2024-11-20 11:16:33.850927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.459 [2024-11-20 11:16:33.850937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x133a580, cid 3, qid 0 00:22:06.459 [2024-11-20 11:16:33.854955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.459 [2024-11-20 11:16:33.854963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.459 [2024-11-20 11:16:33.854966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.854969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.459 [2024-11-20 11:16:33.854979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.854983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.854986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d8690) 00:22:06.459 [2024-11-20 11:16:33.854992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.459 [2024-11-20 11:16:33.855002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133a580, cid 3, qid 0 00:22:06.459 [2024-11-20 11:16:33.855068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.459 [2024-11-20 11:16:33.855073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.459 [2024-11-20 11:16:33.855076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.459 [2024-11-20 11:16:33.855079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133a580) on tqpair=0x12d8690 00:22:06.459 [2024-11-20 11:16:33.855085] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:22:06.459 0% 00:22:06.459 Data Units Read: 0 00:22:06.459 Data Units Written: 0 00:22:06.459 Host Read Commands: 0 00:22:06.459 Host Write Commands: 0 00:22:06.459 Controller Busy Time: 0 minutes 00:22:06.459 Power Cycles: 0 
00:22:06.459 Power On Hours: 0 hours 00:22:06.459 Unsafe Shutdowns: 0 00:22:06.459 Unrecoverable Media Errors: 0 00:22:06.459 Lifetime Error Log Entries: 0 00:22:06.459 Warning Temperature Time: 0 minutes 00:22:06.459 Critical Temperature Time: 0 minutes 00:22:06.459 00:22:06.459 Number of Queues 00:22:06.459 ================ 00:22:06.459 Number of I/O Submission Queues: 127 00:22:06.459 Number of I/O Completion Queues: 127 00:22:06.459 00:22:06.459 Active Namespaces 00:22:06.459 ================= 00:22:06.459 Namespace ID:1 00:22:06.459 Error Recovery Timeout: Unlimited 00:22:06.459 Command Set Identifier: NVM (00h) 00:22:06.459 Deallocate: Supported 00:22:06.459 Deallocated/Unwritten Error: Not Supported 00:22:06.459 Deallocated Read Value: Unknown 00:22:06.459 Deallocate in Write Zeroes: Not Supported 00:22:06.459 Deallocated Guard Field: 0xFFFF 00:22:06.459 Flush: Supported 00:22:06.459 Reservation: Supported 00:22:06.459 Namespace Sharing Capabilities: Multiple Controllers 00:22:06.459 Size (in LBAs): 131072 (0GiB) 00:22:06.459 Capacity (in LBAs): 131072 (0GiB) 00:22:06.459 Utilization (in LBAs): 131072 (0GiB) 00:22:06.459 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:06.459 EUI64: ABCDEF0123456789 00:22:06.459 UUID: 1fc42d71-1818-4c6c-9741-2de8f10f99df 00:22:06.459 Thin Provisioning: Not Supported 00:22:06.459 Per-NS Atomic Units: Yes 00:22:06.459 Atomic Boundary Size (Normal): 0 00:22:06.459 Atomic Boundary Size (PFail): 0 00:22:06.459 Atomic Boundary Offset: 0 00:22:06.459 Maximum Single Source Range Length: 65535 00:22:06.459 Maximum Copy Length: 65535 00:22:06.459 Maximum Source Range Count: 1 00:22:06.459 NGUID/EUI64 Never Reused: No 00:22:06.459 Namespace Write Protected: No 00:22:06.459 Number of LBA Formats: 1 00:22:06.459 Current LBA Format: LBA Format #00 00:22:06.459 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:06.459 00:22:06.459 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:06.459 11:16:33 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.459 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.459 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.459 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.459 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:06.460 rmmod nvme_tcp 00:22:06.460 rmmod nvme_fabrics 00:22:06.460 rmmod nvme_keyring 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 4138696 ']' 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 4138696 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 4138696 ']' 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 
-- # kill -0 4138696 00:22:06.460 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:06.719 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.719 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4138696 00:22:06.719 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:06.719 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:06.719 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4138696' 00:22:06.719 killing process with pid 4138696 00:22:06.719 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 4138696 00:22:06.719 11:16:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 4138696 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.719 11:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.259 00:22:09.259 real 0m9.384s 00:22:09.259 user 0m5.561s 00:22:09.259 sys 0m4.936s 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.259 ************************************ 00:22:09.259 END TEST nvmf_identify 00:22:09.259 ************************************ 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.259 ************************************ 00:22:09.259 START TEST nvmf_perf 00:22:09.259 ************************************ 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:09.259 * Looking for test storage... 
00:22:09.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:09.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.259 --rc genhtml_branch_coverage=1 00:22:09.259 --rc genhtml_function_coverage=1 00:22:09.259 --rc genhtml_legend=1 00:22:09.259 --rc geninfo_all_blocks=1 00:22:09.259 --rc geninfo_unexecuted_blocks=1 00:22:09.259 00:22:09.259 ' 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:09.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:09.259 --rc genhtml_branch_coverage=1 00:22:09.259 --rc genhtml_function_coverage=1 00:22:09.259 --rc genhtml_legend=1 00:22:09.259 --rc geninfo_all_blocks=1 00:22:09.259 --rc geninfo_unexecuted_blocks=1 00:22:09.259 00:22:09.259 ' 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:09.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.259 --rc genhtml_branch_coverage=1 00:22:09.259 --rc genhtml_function_coverage=1 00:22:09.259 --rc genhtml_legend=1 00:22:09.259 --rc geninfo_all_blocks=1 00:22:09.259 --rc geninfo_unexecuted_blocks=1 00:22:09.259 00:22:09.259 ' 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:09.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.259 --rc genhtml_branch_coverage=1 00:22:09.259 --rc genhtml_function_coverage=1 00:22:09.259 --rc genhtml_legend=1 00:22:09.259 --rc geninfo_all_blocks=1 00:22:09.259 --rc geninfo_unexecuted_blocks=1 00:22:09.259 00:22:09.259 ' 00:22:09.259 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:09.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:09.260 11:16:36 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.260 11:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.834 11:16:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.834 
11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.834 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:15.835 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:15.835 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:15.835 Found net devices under 0000:86:00.0: cvl_0_0 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.835 11:16:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:15.835 Found net devices under 0000:86:00.1: cvl_0_1 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:22:15.835 00:22:15.835 --- 10.0.0.2 ping statistics --- 00:22:15.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.835 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:22:15.835 00:22:15.835 --- 10.0.0.1 ping statistics --- 00:22:15.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.835 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=4142437 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 4142437 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 4142437 ']' 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.835 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:15.835 [2024-11-20 11:16:42.521748] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:22:15.835 [2024-11-20 11:16:42.521799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.835 [2024-11-20 11:16:42.601354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.835 [2024-11-20 11:16:42.644528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.835 [2024-11-20 11:16:42.644565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.835 [2024-11-20 11:16:42.644572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.835 [2024-11-20 11:16:42.644578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.835 [2024-11-20 11:16:42.644583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:15.835 [2024-11-20 11:16:42.646180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.835 [2024-11-20 11:16:42.646289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.835 [2024-11-20 11:16:42.646399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.835 [2024-11-20 11:16:42.646400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.836 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.836 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:15.836 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.836 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.836 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:15.836 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.836 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:15.836 11:16:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:18.372 11:16:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:18.372 11:16:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:18.631 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:18.631 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:18.891 11:16:46 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:18.891 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:18.891 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:18.891 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:18.891 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:19.150 [2024-11-20 11:16:46.433464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.150 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.409 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:19.409 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:19.409 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:19.409 11:16:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:19.669 11:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.928 [2024-11-20 11:16:47.260597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.928 11:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:20.187 11:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:20.187 11:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:20.187 11:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:20.187 11:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:21.566 Initializing NVMe Controllers 00:22:21.566 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:21.566 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:21.566 Initialization complete. Launching workers. 00:22:21.566 ======================================================== 00:22:21.566 Latency(us) 00:22:21.566 Device Information : IOPS MiB/s Average min max 00:22:21.566 PCIE (0000:5e:00.0) NSID 1 from core 0: 97552.66 381.07 327.61 10.57 5394.26 00:22:21.566 ======================================================== 00:22:21.566 Total : 97552.66 381.07 327.61 10.57 5394.26 00:22:21.566 00:22:21.566 11:16:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:22.504 Initializing NVMe Controllers 00:22:22.505 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:22.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:22.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:22.505 Initialization complete. Launching workers. 
00:22:22.505 ======================================================== 00:22:22.505 Latency(us) 00:22:22.505 Device Information : IOPS MiB/s Average min max 00:22:22.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.00 0.37 10694.55 107.71 44749.52 00:22:22.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19700.44 7198.38 48851.05 00:22:22.505 ======================================================== 00:22:22.505 Total : 146.00 0.57 13840.45 107.71 48851.05 00:22:22.505 00:22:22.505 11:16:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:24.411 Initializing NVMe Controllers 00:22:24.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:24.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:24.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:24.411 Initialization complete. Launching workers. 
00:22:24.411 ======================================================== 00:22:24.411 Latency(us) 00:22:24.411 Device Information : IOPS MiB/s Average min max 00:22:24.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10869.49 42.46 2959.43 439.51 41592.85 00:22:24.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3798.12 14.84 8445.47 7008.65 15802.78 00:22:24.411 ======================================================== 00:22:24.411 Total : 14667.61 57.30 4380.02 439.51 41592.85 00:22:24.411 00:22:24.411 11:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:24.411 11:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:24.411 11:16:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.949 Initializing NVMe Controllers 00:22:26.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:26.949 Controller IO queue size 128, less than required. 00:22:26.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:26.949 Controller IO queue size 128, less than required. 00:22:26.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:26.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:26.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:26.949 Initialization complete. Launching workers. 
00:22:26.949 ======================================================== 00:22:26.949 Latency(us) 00:22:26.949 Device Information : IOPS MiB/s Average min max 00:22:26.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1759.36 439.84 74366.78 48266.84 119566.21 00:22:26.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 582.30 145.57 228369.92 87439.00 362027.73 00:22:26.949 ======================================================== 00:22:26.949 Total : 2341.66 585.41 112662.47 48266.84 362027.73 00:22:26.949 00:22:26.949 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:26.949 No valid NVMe controllers or AIO or URING devices found 00:22:26.949 Initializing NVMe Controllers 00:22:26.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:26.949 Controller IO queue size 128, less than required. 00:22:26.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:26.949 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:26.949 Controller IO queue size 128, less than required. 00:22:26.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:26.949 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:26.949 WARNING: Some requested NVMe devices were skipped 00:22:26.949 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:30.239 Initializing NVMe Controllers 00:22:30.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.240 Controller IO queue size 128, less than required. 00:22:30.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.240 Controller IO queue size 128, less than required. 00:22:30.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:30.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:30.240 Initialization complete. Launching workers. 
00:22:30.240 00:22:30.240 ==================== 00:22:30.240 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:30.240 TCP transport: 00:22:30.240 polls: 11449 00:22:30.240 idle_polls: 8058 00:22:30.240 sock_completions: 3391 00:22:30.240 nvme_completions: 6039 00:22:30.240 submitted_requests: 9032 00:22:30.240 queued_requests: 1 00:22:30.240 00:22:30.240 ==================== 00:22:30.240 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:30.240 TCP transport: 00:22:30.240 polls: 11714 00:22:30.240 idle_polls: 7769 00:22:30.240 sock_completions: 3945 00:22:30.240 nvme_completions: 6631 00:22:30.240 submitted_requests: 9892 00:22:30.240 queued_requests: 1 00:22:30.240 ======================================================== 00:22:30.240 Latency(us) 00:22:30.240 Device Information : IOPS MiB/s Average min max 00:22:30.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1506.06 376.51 86721.95 54883.49 143616.27 00:22:30.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1653.72 413.43 78712.57 40765.07 134886.60 00:22:30.240 ======================================================== 00:22:30.240 Total : 3159.78 789.94 82530.12 40765.07 143616.27 00:22:30.240 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:30.240 rmmod nvme_tcp 00:22:30.240 rmmod nvme_fabrics 00:22:30.240 rmmod nvme_keyring 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 4142437 ']' 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 4142437 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 4142437 ']' 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 4142437 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4142437 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4142437' 00:22:30.240 killing process with pid 4142437 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 4142437 00:22:30.240 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 4142437 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.531 11:17:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:33.531 00:22:33.531 real 0m24.562s 00:22:33.531 user 1m4.199s 00:22:33.531 sys 0m8.415s 00:22:33.531 11:17:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.531 11:17:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:33.531 ************************************ 00:22:33.531 END TEST nvmf_perf 00:22:33.531 ************************************ 00:22:33.531 11:17:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:33.531 11:17:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:33.531 11:17:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.531 11:17:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.531 ************************************ 00:22:33.531 START TEST nvmf_fio_host 00:22:33.531 ************************************ 00:22:33.531 11:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:33.791 * Looking for test storage... 00:22:33.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.791 11:17:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.791 11:17:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.791 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:33.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.791 --rc genhtml_branch_coverage=1 00:22:33.791 --rc genhtml_function_coverage=1 00:22:33.791 --rc genhtml_legend=1 00:22:33.791 --rc geninfo_all_blocks=1 00:22:33.791 --rc geninfo_unexecuted_blocks=1 00:22:33.791 00:22:33.791 ' 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.792 --rc genhtml_branch_coverage=1 00:22:33.792 --rc genhtml_function_coverage=1 00:22:33.792 --rc genhtml_legend=1 00:22:33.792 --rc geninfo_all_blocks=1 00:22:33.792 --rc geninfo_unexecuted_blocks=1 00:22:33.792 00:22:33.792 ' 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.792 --rc genhtml_branch_coverage=1 00:22:33.792 --rc genhtml_function_coverage=1 00:22:33.792 --rc genhtml_legend=1 00:22:33.792 --rc geninfo_all_blocks=1 00:22:33.792 --rc geninfo_unexecuted_blocks=1 00:22:33.792 00:22:33.792 ' 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.792 --rc genhtml_branch_coverage=1 00:22:33.792 --rc genhtml_function_coverage=1 00:22:33.792 --rc genhtml_legend=1 00:22:33.792 --rc geninfo_all_blocks=1 00:22:33.792 --rc geninfo_unexecuted_blocks=1 00:22:33.792 00:22:33.792 ' 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:33.792 11:17:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.792 11:17:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:40.368 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:40.368 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.368 11:17:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:40.368 Found net devices under 0000:86:00.0: cvl_0_0 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:40.368 Found net devices under 0000:86:00.1: cvl_0_1 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.368 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.368 11:17:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.369 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.369 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.369 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.369 11:17:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:22:40.369 00:22:40.369 --- 10.0.0.2 ping statistics --- 00:22:40.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.369 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:22:40.369 00:22:40.369 --- 10.0.0.1 ping statistics --- 00:22:40.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.369 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4148582 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4148582 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 4148582 ']' 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.369 [2024-11-20 11:17:07.158116] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:22:40.369 [2024-11-20 11:17:07.158162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.369 [2024-11-20 11:17:07.238521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.369 [2024-11-20 11:17:07.282805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.369 [2024-11-20 11:17:07.282841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:40.369 [2024-11-20 11:17:07.282848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.369 [2024-11-20 11:17:07.282854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.369 [2024-11-20 11:17:07.282860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.369 [2024-11-20 11:17:07.284454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.369 [2024-11-20 11:17:07.284485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.369 [2024-11-20 11:17:07.284526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.369 [2024-11-20 11:17:07.284527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:40.369 [2024-11-20 11:17:07.549556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:40.369 Malloc1 00:22:40.369 11:17:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.628 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:40.888 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.147 [2024-11-20 11:17:08.439459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.147 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:41.411 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:41.411 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:41.411 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:41.411 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:41.411 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:41.411 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:41.412 11:17:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:41.412 11:17:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:41.677 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:41.677 fio-3.35 00:22:41.677 Starting 1 thread 00:22:44.233 [2024-11-20 11:17:11.413782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126f3d0 is same with the state(6) to be set 00:22:44.233 [2024-11-20 11:17:11.413836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126f3d0 is same with the state(6) to be set 00:22:44.233 [2024-11-20 11:17:11.413844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126f3d0 is same with the state(6) to be set 00:22:44.233 [2024-11-20 11:17:11.413851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126f3d0 is same with the state(6) to be set 00:22:44.233 [2024-11-20 11:17:11.413857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126f3d0 is same with the state(6) to be set 00:22:44.233 [2024-11-20 11:17:11.413863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126f3d0 is same with the state(6) to be set 00:22:44.233 00:22:44.233 test: (groupid=0, jobs=1): err= 0: pid=4149003: Wed Nov 20 11:17:11 2024 00:22:44.233 read: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(90.1MiB/2005msec) 00:22:44.233 slat (nsec): min=1538, max=240338, avg=1725.43, stdev=2234.96 00:22:44.233 clat (usec): min=3098, max=10134, avg=6141.49, stdev=488.54 00:22:44.233 lat (usec): min=3133, max=10136, avg=6143.22, stdev=488.38 00:22:44.233 clat percentiles (usec): 00:22:44.233 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5538], 
20.00th=[ 5735], 00:22:44.233 | 30.00th=[ 5932], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:22:44.233 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:22:44.233 | 99.00th=[ 7242], 99.50th=[ 7832], 99.90th=[ 8979], 99.95th=[ 9634], 00:22:44.233 | 99.99th=[10028] 00:22:44.233 bw ( KiB/s): min=44840, max=46936, per=99.95%, avg=45986.00, stdev=873.46, samples=4 00:22:44.233 iops : min=11210, max=11734, avg=11496.50, stdev=218.37, samples=4 00:22:44.233 write: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(89.4MiB/2005msec); 0 zone resets 00:22:44.233 slat (nsec): min=1593, max=224675, avg=1782.04, stdev=1654.97 00:22:44.233 clat (usec): min=2444, max=9936, avg=4958.74, stdev=403.86 00:22:44.233 lat (usec): min=2460, max=9938, avg=4960.52, stdev=403.76 00:22:44.233 clat percentiles (usec): 00:22:44.233 | 1.00th=[ 4113], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4686], 00:22:44.233 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5014], 00:22:44.233 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5538], 00:22:44.233 | 99.00th=[ 5866], 99.50th=[ 6194], 99.90th=[ 8029], 99.95th=[ 8848], 00:22:44.233 | 99.99th=[ 9896] 00:22:44.233 bw ( KiB/s): min=45248, max=46560, per=100.00%, avg=45666.00, stdev=602.23, samples=4 00:22:44.233 iops : min=11312, max=11640, avg=11416.50, stdev=150.56, samples=4 00:22:44.233 lat (msec) : 4=0.31%, 10=99.68%, 20=0.01% 00:22:44.233 cpu : usr=74.65%, sys=24.30%, ctx=96, majf=0, minf=3 00:22:44.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:44.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:44.233 issued rwts: total=23062,22891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:44.233 00:22:44.233 Run status group 0 (all jobs): 00:22:44.233 READ: bw=44.9MiB/s (47.1MB/s), 
44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=90.1MiB (94.5MB), run=2005-2005msec 00:22:44.233 WRITE: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=89.4MiB (93.8MB), run=2005-2005msec 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:44.233 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:44.502 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:44.502 fio-3.35 00:22:44.502 Starting 1 thread 00:22:47.036 00:22:47.036 test: (groupid=0, jobs=1): err= 0: pid=4149540: Wed Nov 20 11:17:14 2024 00:22:47.036 read: IOPS=10.7k, BW=167MiB/s (176MB/s)(336MiB/2007msec) 00:22:47.036 slat (usec): min=2, max=100, avg= 2.84, stdev= 1.28 00:22:47.036 clat (usec): min=1320, max=12537, avg=6769.25, stdev=1496.72 00:22:47.036 lat (usec): min=1322, max=12539, avg=6772.09, 
stdev=1496.81 00:22:47.036 clat percentiles (usec): 00:22:47.036 | 1.00th=[ 3687], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5407], 00:22:47.036 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6849], 60.00th=[ 7242], 00:22:47.036 | 70.00th=[ 7635], 80.00th=[ 8029], 90.00th=[ 8586], 95.00th=[ 9110], 00:22:47.036 | 99.00th=[10290], 99.50th=[10814], 99.90th=[11469], 99.95th=[11600], 00:22:47.036 | 99.99th=[12125] 00:22:47.036 bw ( KiB/s): min=80864, max=92416, per=51.15%, avg=87704.00, stdev=5262.51, samples=4 00:22:47.036 iops : min= 5054, max= 5776, avg=5481.50, stdev=328.91, samples=4 00:22:47.036 write: IOPS=6273, BW=98.0MiB/s (103MB/s)(179MiB/1825msec); 0 zone resets 00:22:47.036 slat (usec): min=29, max=379, avg=31.70, stdev= 6.66 00:22:47.036 clat (usec): min=5046, max=14703, avg=8817.31, stdev=1484.71 00:22:47.036 lat (usec): min=5077, max=14734, avg=8849.01, stdev=1485.47 00:22:47.036 clat percentiles (usec): 00:22:47.036 | 1.00th=[ 6063], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 7570], 00:22:47.036 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8979], 00:22:47.036 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10945], 95.00th=[11469], 00:22:47.036 | 99.00th=[12518], 99.50th=[13042], 99.90th=[14091], 99.95th=[14353], 00:22:47.036 | 99.99th=[14615] 00:22:47.036 bw ( KiB/s): min=85824, max=94880, per=90.92%, avg=91272.00, stdev=4184.08, samples=4 00:22:47.036 iops : min= 5364, max= 5930, avg=5704.50, stdev=261.51, samples=4 00:22:47.036 lat (msec) : 2=0.08%, 4=1.43%, 10=89.84%, 20=8.66% 00:22:47.036 cpu : usr=85.94%, sys=13.36%, ctx=43, majf=0, minf=3 00:22:47.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:47.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:47.036 issued rwts: total=21509,11450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.036 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:22:47.036 00:22:47.036 Run status group 0 (all jobs): 00:22:47.036 READ: bw=167MiB/s (176MB/s), 167MiB/s-167MiB/s (176MB/s-176MB/s), io=336MiB (352MB), run=2007-2007msec 00:22:47.036 WRITE: bw=98.0MiB/s (103MB/s), 98.0MiB/s-98.0MiB/s (103MB/s-103MB/s), io=179MiB (188MB), run=1825-1825msec 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.036 rmmod nvme_tcp 00:22:47.036 rmmod nvme_fabrics 00:22:47.036 rmmod nvme_keyring 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 4148582 ']' 00:22:47.036 11:17:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 4148582 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 4148582 ']' 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 4148582 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.036 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4148582 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4148582' 00:22:47.294 killing process with pid 4148582 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 4148582 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 4148582 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.294 11:17:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.294 11:17:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.827 00:22:49.827 real 0m15.842s 00:22:49.827 user 0m46.913s 00:22:49.827 sys 0m6.572s 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.827 ************************************ 00:22:49.827 END TEST nvmf_fio_host 00:22:49.827 ************************************ 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.827 ************************************ 00:22:49.827 START TEST nvmf_failover 00:22:49.827 ************************************ 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:49.827 * Looking for test storage... 
00:22:49.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.827 11:17:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.827 --rc genhtml_branch_coverage=1 00:22:49.827 --rc genhtml_function_coverage=1 00:22:49.827 --rc genhtml_legend=1 00:22:49.827 --rc geninfo_all_blocks=1 00:22:49.827 --rc geninfo_unexecuted_blocks=1 00:22:49.827 00:22:49.827 ' 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:49.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.827 --rc genhtml_branch_coverage=1 00:22:49.827 --rc genhtml_function_coverage=1 00:22:49.827 --rc genhtml_legend=1 00:22:49.827 --rc geninfo_all_blocks=1 00:22:49.827 --rc geninfo_unexecuted_blocks=1 00:22:49.827 00:22:49.827 ' 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.827 --rc genhtml_branch_coverage=1 00:22:49.827 --rc genhtml_function_coverage=1 00:22:49.827 --rc genhtml_legend=1 00:22:49.827 --rc geninfo_all_blocks=1 00:22:49.827 --rc geninfo_unexecuted_blocks=1 00:22:49.827 00:22:49.827 ' 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.827 --rc genhtml_branch_coverage=1 00:22:49.827 --rc genhtml_function_coverage=1 00:22:49.827 --rc genhtml_legend=1 00:22:49.827 --rc geninfo_all_blocks=1 00:22:49.827 --rc geninfo_unexecuted_blocks=1 00:22:49.827 00:22:49.827 ' 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.827 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.828 11:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:56.399 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.400 11:17:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:56.400 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.400 11:17:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:56.400 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.400 11:17:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:56.400 Found net devices under 0000:86:00.0: cvl_0_0 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:56.400 Found net devices under 0000:86:00.1: cvl_0_1 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:56.400 11:17:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:56.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:22:56.400 00:22:56.400 --- 10.0.0.2 ping statistics --- 00:22:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.400 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:22:56.400 00:22:56.400 --- 10.0.0.1 ping statistics --- 00:22:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.400 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:56.400 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:56.400 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:56.400 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:56.400 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:56.400 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:56.400 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=4153506 00:22:56.400 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 4153506 00:22:56.400 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:56.400 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4153506 ']' 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:56.401 [2024-11-20 11:17:23.069581] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:22:56.401 [2024-11-20 11:17:23.069628] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.401 [2024-11-20 11:17:23.150494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:56.401 [2024-11-20 11:17:23.192078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.401 [2024-11-20 11:17:23.192114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.401 [2024-11-20 11:17:23.192121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.401 [2024-11-20 11:17:23.192127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:56.401 [2024-11-20 11:17:23.192132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.401 [2024-11-20 11:17:23.193537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.401 [2024-11-20 11:17:23.193648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.401 [2024-11-20 11:17:23.193649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:56.401 [2024-11-20 11:17:23.489532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:56.401 Malloc0 00:22:56.401 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.658 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.915 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.915 [2024-11-20 11:17:24.342857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.915 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:57.172 [2024-11-20 11:17:24.535403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:57.172 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:57.431 [2024-11-20 11:17:24.732040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:57.431 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4153768 00:22:57.431 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.431 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:57.431 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4153768 /var/tmp/bdevperf.sock 00:22:57.431 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 4153768 ']' 00:22:57.431 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.431 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.431 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.431 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.431 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:57.688 11:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.688 11:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:57.688 11:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:57.944 NVMe0n1 00:22:57.944 11:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:58.201 00:22:58.458 11:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4153994 00:22:58.458 11:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:58.458 11:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:22:59.389 11:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.647 11:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:02.922 11:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:02.922 00:23:02.922 11:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:03.180 [2024-11-20 11:17:30.486810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e1060 is same with the state(6) to be set 00:23:03.180 [identical 'nvmf_tcp_qpair_set_recv_state' messages for tqpair=0x11e1060 repeated through 11:17:30.487360; duplicate lines omitted] 00:23:03.181 11:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:06.461 11:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.461 [2024-11-20 11:17:33.701071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.461 11:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:07.394 11:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:07.651 [2024-11-20 11:17:34.929399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e1e30 is same with the state(6) to be set 00:23:07.652 11:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 4153994 00:23:14.283 { 00:23:14.283 "results": [ 00:23:14.283 { 00:23:14.283 "job": "NVMe0n1", 00:23:14.283 "core_mask": "0x1", 00:23:14.283 "workload": "verify", 00:23:14.283 "status": "finished", 00:23:14.283 "verify_range": { 00:23:14.283 "start": 0, 00:23:14.283 "length": 16384 00:23:14.283 }, 00:23:14.283 "queue_depth": 128, 00:23:14.283 "io_size": 4096, 00:23:14.283 "runtime": 15.004095, 00:23:14.283 "iops": 10910.488103414435, 00:23:14.283 "mibps": 42.619094153962635, 00:23:14.283 "io_failed": 11429, 00:23:14.283 "io_timeout": 0, 00:23:14.283 "avg_latency_us": 10944.026139498555, 00:23:14.283 "min_latency_us": 432.7513043478261, 00:23:14.283 "max_latency_us": 21427.42260869565 00:23:14.283 } 00:23:14.283 ], 00:23:14.283 "core_count": 1 00:23:14.283 } 00:23:14.283 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 4153768 00:23:14.283 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4153768 ']' 00:23:14.283 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4153768 00:23:14.283 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:14.283 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.283 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153768 00:23:14.283 11:17:40 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:14.283 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:14.283 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153768' 00:23:14.283 killing process with pid 4153768 00:23:14.283 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4153768 00:23:14.283 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4153768 00:23:14.283 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:14.283 [2024-11-20 11:17:24.804453] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:23:14.284 [2024-11-20 11:17:24.804509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153768 ] 00:23:14.284 [2024-11-20 11:17:24.882968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.284 [2024-11-20 11:17:24.924968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.284 Running I/O for 15 seconds... 
00:23:14.284 11119.00 IOPS, 43.43 MiB/s [2024-11-20T10:17:41.780Z] [2024-11-20 11:17:26.900233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 
[2024-11-20 11:17:26.900368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 
[2024-11-20 11:17:26.900645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900732] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.284 [2024-11-20 11:17:26.900738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.284 [2024-11-20 11:17:26.900800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.284 [2024-11-20 11:17:26.900809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 
11:17:26.900912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.900990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.900998] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.285 [2024-11-20 11:17:26.901249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:73 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.285 [2024-11-20 11:17:26.901331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.285 [2024-11-20 11:17:26.901337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:14.285 [2024-11-20 11:17:26.901345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.285 [2024-11-20 11:17:26.901353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.287 [... identical command/completion pairs repeated for READ/WRITE lba:97624 through lba:98064 on sqid:1, every one ABORTED - SQ DELETION (00/08) ...]
00:23:14.287 [2024-11-20 11:17:26.902250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaced60 is same with the state(6) to be set
00:23:14.287 [2024-11-20 11:17:26.902260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:14.287 [2024-11-20 11:17:26.902266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:14.287 [2024-11-20 11:17:26.902273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98072 len:8 PRP1 0x0 PRP2 0x0
00:23:14.287 [2024-11-20 11:17:26.902279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.287 [2024-11-20 11:17:26.902322] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:14.287 [... four ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0 through cid:3) each ABORTED - SQ DELETION (00/08) ...]
00:23:14.287 [2024-11-20 11:17:26.902403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:14.287 [2024-11-20 11:17:26.905269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:14.287 [2024-11-20 11:17:26.905295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaaa340 (9): Bad file descriptor
00:23:14.287 [2024-11-20 11:17:26.932519] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:14.287 10924.00 IOPS, 42.67 MiB/s [2024-11-20T10:17:41.783Z] 11017.67 IOPS, 43.04 MiB/s [2024-11-20T10:17:41.783Z] 11043.50 IOPS, 43.14 MiB/s [2024-11-20T10:17:41.783Z]
00:23:14.287 [2024-11-20 11:17:30.488858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.287 [2024-11-20 11:17:30.488892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.289 [... identical command/completion pairs repeated for READ/WRITE lba:31192 through lba:31584 on sqid:1, every one ABORTED - SQ DELETION (00/08) ...]
[2024-11-20 11:17:30.489684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489770] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.489986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.489993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 
11:17:30.490122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490205] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.289 [2024-11-20 11:17:30.490245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.289 [2024-11-20 11:17:30.490251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 
11:17:30.490553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:39 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.290 [2024-11-20 11:17:30.490719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.290 [2024-11-20 11:17:30.490726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:14.290 [2024-11-20 11:17:30.490746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:14.290 [2024-11-20 11:17:30.490754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32136 len:8 PRP1 0x0 PRP2 0x0
00:23:14.290 [2024-11-20 11:17:30.490761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical "aborting queued i/o" / manual-completion / "ABORTED - SQ DELETION" triples repeat for lba:32144 through lba:32200]
00:23:14.290 [2024-11-20 11:17:30.491012] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:14.291 [2024-11-20 11:17:30.491035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:14.291 [2024-11-20 11:17:30.491042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.291 [2024-11-20 11:17:30.491051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:14.291 [2024-11-20 11:17:30.491058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.291 [2024-11-20 11:17:30.491065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:14.291 [2024-11-20 11:17:30.501527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.291 [2024-11-20 11:17:30.501538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:14.291 [2024-11-20 11:17:30.501546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.291 [2024-11-20 11:17:30.501554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:14.291 [2024-11-20 11:17:30.501577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaaa340 (9): Bad file descriptor
00:23:14.291 [2024-11-20 11:17:30.504398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:14.291 [2024-11-20 11:17:30.646479] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:23:14.291 10699.40 IOPS, 41.79 MiB/s [2024-11-20T10:17:41.787Z] 10792.50 IOPS, 42.16 MiB/s [2024-11-20T10:17:41.787Z] 10835.00 IOPS, 42.32 MiB/s [2024-11-20T10:17:41.787Z] 10869.50 IOPS, 42.46 MiB/s [2024-11-20T10:17:41.787Z] 10896.22 IOPS, 42.56 MiB/s [2024-11-20T10:17:41.787Z] [2024-11-20 11:17:34.931959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.931997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 
11:17:34.932175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.291 [2024-11-20 11:17:34.932214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:14.291 [2024-11-20 11:17:34.932361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.291 [2024-11-20 11:17:34.932398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.291 [2024-11-20 11:17:34.932408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 
11:17:34.932623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932710] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.292 [2024-11-20 11:17:34.932937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.292 [2024-11-20 11:17:34.932946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.932958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.932966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.932973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.932981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.932988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.932997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 
[2024-11-20 11:17:34.933068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933150] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.293 [2024-11-20 11:17:34.933219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.293 [2024-11-20 11:17:34.933250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73456 len:8 PRP1 0x0 PRP2 0x0 00:23:14.293 [2024-11-20 11:17:34.933257] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.293 [2024-11-20 11:17:34.933267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.293 [2024-11-20 11:17:34.933273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.293 [2024-11-20 11:17:34.933279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73464 len:8 PRP1 0x0 PRP2 0x0 00:23:14.293 [2024-11-20 11:17:34.933287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 48 further identical abort/complete cycles omitted: WRITE sqid:1 cid:0 nsid:1, lba:73472 through lba:73848 in steps of 8, len:8 each, every command completed manually with ABORTED - SQ DELETION (00/08) ...]
00:23:14.295 [2024-11-20 11:17:34.944668] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:14.295 [2024-11-20 11:17:34.944698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.295 [2024-11-20 11:17:34.944709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.295 [2024-11-20 11:17:34.944720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.295 [2024-11-20 11:17:34.944729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.295 [2024-11-20 11:17:34.944739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.295 [2024-11-20 11:17:34.944748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:23:14.295 [2024-11-20 11:17:34.944757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.295 [2024-11-20 11:17:34.944767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.295 [2024-11-20 11:17:34.944776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:14.295 [2024-11-20 11:17:34.944817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaaa340 (9): Bad file descriptor 00:23:14.295 [2024-11-20 11:17:34.948666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:14.295 [2024-11-20 11:17:35.017566] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:14.295 10816.60 IOPS, 42.25 MiB/s [2024-11-20T10:17:41.791Z] 10838.73 IOPS, 42.34 MiB/s [2024-11-20T10:17:41.791Z] 10855.42 IOPS, 42.40 MiB/s [2024-11-20T10:17:41.791Z] 10879.62 IOPS, 42.50 MiB/s [2024-11-20T10:17:41.791Z] 10902.36 IOPS, 42.59 MiB/s 00:23:14.295 Latency(us) 00:23:14.295 [2024-11-20T10:17:41.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.295 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:14.295 Verification LBA range: start 0x0 length 0x4000 00:23:14.295 NVMe0n1 : 15.00 10910.49 42.62 761.73 0.00 10944.03 432.75 21427.42 00:23:14.295 [2024-11-20T10:17:41.791Z] =================================================================================================================== 00:23:14.295 [2024-11-20T10:17:41.791Z] Total : 10910.49 42.62 761.73 0.00 10944.03 432.75 21427.42 00:23:14.295 Received shutdown signal, test time was about 15.000000 seconds 00:23:14.295 00:23:14.295 Latency(us) 00:23:14.295 [2024-11-20T10:17:41.791Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.295 [2024-11-20T10:17:41.791Z] =================================================================================================================== 00:23:14.295 [2024-11-20T10:17:41.791Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4156517 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4156517 /var/tmp/bdevperf.sock 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4156517 ']' 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:14.295 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.296 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:14.296 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:14.296 [2024-11-20 11:17:41.519661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:14.296 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:14.296 [2024-11-20 11:17:41.720232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:14.296 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:14.584 NVMe0n1 00:23:14.584 11:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:15.226 00:23:15.226 11:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:15.484 00:23:15.484 11:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:15.484 11:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:15.484 11:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.741 11:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:19.018 11:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:19.018 11:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:19.018 11:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4157300 00:23:19.018 11:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:19.019 11:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 4157300 00:23:20.389 { 00:23:20.389 "results": [ 00:23:20.389 { 00:23:20.389 "job": "NVMe0n1", 00:23:20.389 "core_mask": "0x1", 00:23:20.389 "workload": "verify", 00:23:20.389 "status": "finished", 00:23:20.389 "verify_range": { 00:23:20.389 "start": 0, 00:23:20.389 "length": 16384 00:23:20.389 }, 00:23:20.389 "queue_depth": 128, 00:23:20.389 "io_size": 4096, 00:23:20.389 "runtime": 1.009007, 00:23:20.389 "iops": 11122.816789179857, 00:23:20.389 "mibps": 43.448503082733815, 00:23:20.389 "io_failed": 0, 00:23:20.389 "io_timeout": 0, 00:23:20.389 "avg_latency_us": 
11466.754610291753, 00:23:20.389 "min_latency_us": 1795.1165217391303, 00:23:20.389 "max_latency_us": 13620.090434782609 00:23:20.389 } 00:23:20.389 ], 00:23:20.389 "core_count": 1 00:23:20.389 } 00:23:20.389 11:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:20.389 [2024-11-20 11:17:41.136029] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:23:20.389 [2024-11-20 11:17:41.136081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156517 ] 00:23:20.389 [2024-11-20 11:17:41.210869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.389 [2024-11-20 11:17:41.248697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.389 [2024-11-20 11:17:43.110340] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:20.389 [2024-11-20 11:17:43.110386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.389 [2024-11-20 11:17:43.110398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.389 [2024-11-20 11:17:43.110406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.389 [2024-11-20 11:17:43.110413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.389 [2024-11-20 11:17:43.110421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.389 [2024-11-20 11:17:43.110428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.389 [2024-11-20 11:17:43.110435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.389 [2024-11-20 11:17:43.110442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.389 [2024-11-20 11:17:43.110449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:20.389 [2024-11-20 11:17:43.110473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:20.389 [2024-11-20 11:17:43.110487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1113340 (9): Bad file descriptor 00:23:20.389 [2024-11-20 11:17:43.244124] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:20.389 Running I/O for 1 seconds... 
00:23:20.389 11092.00 IOPS, 43.33 MiB/s 00:23:20.389 Latency(us) 00:23:20.389 [2024-11-20T10:17:47.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.389 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:20.389 Verification LBA range: start 0x0 length 0x4000 00:23:20.389 NVMe0n1 : 1.01 11122.82 43.45 0.00 0.00 11466.75 1795.12 13620.09 00:23:20.390 [2024-11-20T10:17:47.886Z] =================================================================================================================== 00:23:20.390 [2024-11-20T10:17:47.886Z] Total : 11122.82 43.45 0.00 0.00 11466.75 1795.12 13620.09 00:23:20.390 11:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.390 11:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:20.390 11:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.647 11:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.647 11:17:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:20.647 11:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.904 11:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 4156517 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4156517 ']' 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4156517 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4156517 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4156517' 00:23:24.180 killing process with pid 4156517 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4156517 00:23:24.180 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4156517 00:23:24.438 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:24.438 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.438 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:24.438 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.696 rmmod nvme_tcp 00:23:24.696 rmmod nvme_fabrics 00:23:24.696 rmmod nvme_keyring 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 4153506 ']' 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 4153506 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4153506 ']' 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4153506 00:23:24.696 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:24.696 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.696 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153506 00:23:24.696 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:24.696 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.696 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153506' 00:23:24.696 killing process with pid 4153506 00:23:24.696 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4153506 00:23:24.696 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4153506 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.955 11:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.861 11:17:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.861 00:23:26.861 real 0m37.438s 00:23:26.861 user 1m58.587s 00:23:26.861 sys 
0m7.948s 00:23:26.861 11:17:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.861 11:17:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:26.861 ************************************ 00:23:26.861 END TEST nvmf_failover 00:23:26.861 ************************************ 00:23:26.861 11:17:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:26.861 11:17:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:26.861 11:17:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.861 11:17:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.121 ************************************ 00:23:27.121 START TEST nvmf_host_discovery 00:23:27.121 ************************************ 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:27.121 * Looking for test storage... 
00:23:27.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:27.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.121 --rc genhtml_branch_coverage=1 00:23:27.121 --rc genhtml_function_coverage=1 00:23:27.121 --rc 
genhtml_legend=1 00:23:27.121 --rc geninfo_all_blocks=1 00:23:27.121 --rc geninfo_unexecuted_blocks=1 00:23:27.121 00:23:27.121 ' 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:27.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.121 --rc genhtml_branch_coverage=1 00:23:27.121 --rc genhtml_function_coverage=1 00:23:27.121 --rc genhtml_legend=1 00:23:27.121 --rc geninfo_all_blocks=1 00:23:27.121 --rc geninfo_unexecuted_blocks=1 00:23:27.121 00:23:27.121 ' 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:27.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.121 --rc genhtml_branch_coverage=1 00:23:27.121 --rc genhtml_function_coverage=1 00:23:27.121 --rc genhtml_legend=1 00:23:27.121 --rc geninfo_all_blocks=1 00:23:27.121 --rc geninfo_unexecuted_blocks=1 00:23:27.121 00:23:27.121 ' 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:27.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.121 --rc genhtml_branch_coverage=1 00:23:27.121 --rc genhtml_function_coverage=1 00:23:27.121 --rc genhtml_legend=1 00:23:27.121 --rc geninfo_all_blocks=1 00:23:27.121 --rc geninfo_unexecuted_blocks=1 00:23:27.121 00:23:27.121 ' 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.121 11:17:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.121 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.122 11:17:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.122 11:17:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:27.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:27.122 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.689 
11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.689 11:18:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:33.689 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:33.689 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.689 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:33.689 Found net devices under 0000:86:00.0: cvl_0_0 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:33.690 Found net devices under 0000:86:00.1: cvl_0_1 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:23:33.690 00:23:33.690 --- 10.0.0.2 ping statistics --- 00:23:33.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.690 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:23:33.690 00:23:33.690 --- 10.0.0.1 ping statistics --- 00:23:33.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.690 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.690 
11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=4161731 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 4161731 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 4161731 ']' 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.690 [2024-11-20 11:18:00.598797] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:23:33.690 [2024-11-20 11:18:00.598866] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.690 [2024-11-20 11:18:00.680213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.690 [2024-11-20 11:18:00.722532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.690 [2024-11-20 11:18:00.722568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.690 [2024-11-20 11:18:00.722575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.690 [2024-11-20 11:18:00.722582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.690 [2024-11-20 11:18:00.722587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.690 [2024-11-20 11:18:00.723162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.690 [2024-11-20 11:18:00.859675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.690 [2024-11-20 11:18:00.871841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:33.690 11:18:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.690 null0 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.690 null1 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:33.690 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4161959 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4161959 /tmp/host.sock 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 4161959 ']' 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:33.691 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.691 11:18:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.691 [2024-11-20 11:18:00.950932] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:23:33.691 [2024-11-20 11:18:00.950979] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161959 ] 00:23:33.691 [2024-11-20 11:18:01.024114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.691 [2024-11-20 11:18:01.067243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.691 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.691 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:33.691 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.691 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:33.691 
11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.691 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.691 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.691 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:33.691 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.691 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.948 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.948 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:33.948 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:33.948 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:33.948 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:33.949 11:18:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:33.949 
11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:33.949 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.206 [2024-11-20 11:18:01.489425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:34.206 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:34.207 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:34.771 [2024-11-20 11:18:02.197299] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:34.771 [2024-11-20 11:18:02.197319] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:34.771 [2024-11-20 11:18:02.197330] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:35.029 [2024-11-20 11:18:02.283587] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:35.029 [2024-11-20 11:18:02.458576] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:35.029 [2024-11-20 11:18:02.459268] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xde7dd0:1 started. 
00:23:35.029 [2024-11-20 11:18:02.460650] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:35.029 [2024-11-20 11:18:02.460666] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:35.029 [2024-11-20 11:18:02.466736] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xde7dd0 was disconnected and freed. delete nvme_qpair. 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:35.286 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:35.544 11:18:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.544 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.545 [2024-11-20 11:18:02.891033] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xde81a0:1 started. 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.545 [2024-11-20 11:18:02.938458] 
bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xde81a0 was disconnected and freed. delete nvme_qpair. 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.545 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.545 [2024-11-20 11:18:02.997521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:35.545 [2024-11-20 11:18:02.997860] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:35.545 [2024-11-20 11:18:02.997883] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:35.545 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.803 11:18:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:35.803 [2024-11-20 11:18:03.085136] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:35.803 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:36.060 [2024-11-20 11:18:03.389496] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:36.060 [2024-11-20 11:18:03.389536] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:36.060 [2024-11-20 11:18:03.389546] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:23:36.060 [2024-11-20 11:18:03.389552] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.992 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.993 [2024-11-20 11:18:04.245913] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:36.993 [2024-11-20 11:18:04.245937] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:36.993 [2024-11-20 11:18:04.253134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.993 [2024-11-20 11:18:04.253155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.993 [2024-11-20 11:18:04.253165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.993 [2024-11-20 11:18:04.253172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.993 [2024-11-20 11:18:04.253180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.993 [2024-11-20 11:18:04.253186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.993 [2024-11-20 11:18:04.253193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.993 [2024-11-20 11:18:04.253200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.993 [2024-11-20 11:18:04.253207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8390 is same with the state(6) to be set 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:36.993 11:18:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:36.993 [2024-11-20 11:18:04.263145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8390 (9): Bad file descriptor 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.993 [2024-11-20 11:18:04.273180] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.993 [2024-11-20 11:18:04.273192] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:36.993 [2024-11-20 11:18:04.273196] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.993 [2024-11-20 11:18:04.273201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.993 [2024-11-20 11:18:04.273219] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:36.993 [2024-11-20 11:18:04.273474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.993 [2024-11-20 11:18:04.273490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8390 with addr=10.0.0.2, port=4420 00:23:36.993 [2024-11-20 11:18:04.273498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8390 is same with the state(6) to be set 00:23:36.993 [2024-11-20 11:18:04.273511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8390 (9): Bad file descriptor 00:23:36.993 [2024-11-20 11:18:04.273522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.993 [2024-11-20 11:18:04.273529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.993 [2024-11-20 11:18:04.273538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.993 [2024-11-20 11:18:04.273544] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.993 [2024-11-20 11:18:04.273550] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.993 [2024-11-20 11:18:04.273554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:36.993 [2024-11-20 11:18:04.283251] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.993 [2024-11-20 11:18:04.283261] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:36.993 [2024-11-20 11:18:04.283265] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.993 [2024-11-20 11:18:04.283269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.993 [2024-11-20 11:18:04.283283] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:36.993 [2024-11-20 11:18:04.283499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.993 [2024-11-20 11:18:04.283512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8390 with addr=10.0.0.2, port=4420 00:23:36.993 [2024-11-20 11:18:04.283520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8390 is same with the state(6) to be set 00:23:36.993 [2024-11-20 11:18:04.283535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8390 (9): Bad file descriptor 00:23:36.993 [2024-11-20 11:18:04.283546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.993 [2024-11-20 11:18:04.283552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.993 [2024-11-20 11:18:04.283559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.993 [2024-11-20 11:18:04.283565] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.993 [2024-11-20 11:18:04.283569] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.993 [2024-11-20 11:18:04.283573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:36.993 [2024-11-20 11:18:04.293316] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.993 [2024-11-20 11:18:04.293330] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:36.993 [2024-11-20 11:18:04.293333] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.993 [2024-11-20 11:18:04.293338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.993 [2024-11-20 11:18:04.293354] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:36.993 [2024-11-20 11:18:04.293559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.993 [2024-11-20 11:18:04.293573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8390 with addr=10.0.0.2, port=4420 00:23:36.993 [2024-11-20 11:18:04.293581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8390 is same with the state(6) to be set 00:23:36.993 [2024-11-20 11:18:04.293593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8390 (9): Bad file descriptor 00:23:36.993 [2024-11-20 11:18:04.293603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.993 [2024-11-20 11:18:04.293611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.993 [2024-11-20 11:18:04.293617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.993 [2024-11-20 11:18:04.293623] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:36.993 [2024-11-20 11:18:04.293628] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.993 [2024-11-20 11:18:04.293632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:36.993 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:36.994 [2024-11-20 11:18:04.303386] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.994 [2024-11-20 11:18:04.303403] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:36.994 [2024-11-20 11:18:04.303407] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:36.994 [2024-11-20 11:18:04.303411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.994 [2024-11-20 11:18:04.303425] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:36.994 [2024-11-20 11:18:04.303654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.994 [2024-11-20 11:18:04.303667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8390 with addr=10.0.0.2, port=4420 00:23:36.994 [2024-11-20 11:18:04.303675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8390 is same with the state(6) to be set 00:23:36.994 [2024-11-20 11:18:04.303686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8390 (9): Bad file descriptor 00:23:36.994 [2024-11-20 11:18:04.303696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.994 [2024-11-20 11:18:04.303702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.994 [2024-11-20 11:18:04.303710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.994 [2024-11-20 11:18:04.303716] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.994 [2024-11-20 11:18:04.303720] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.994 [2024-11-20 11:18:04.303725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:36.994 [2024-11-20 11:18:04.313456] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.994 [2024-11-20 11:18:04.313470] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:36.994 [2024-11-20 11:18:04.313474] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.994 [2024-11-20 11:18:04.313478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.994 [2024-11-20 11:18:04.313493] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:36.994 [2024-11-20 11:18:04.313721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.994 [2024-11-20 11:18:04.313735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8390 with addr=10.0.0.2, port=4420 00:23:36.994 [2024-11-20 11:18:04.313743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8390 is same with the state(6) to be set 00:23:36.994 [2024-11-20 11:18:04.313754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8390 (9): Bad file descriptor 00:23:36.994 [2024-11-20 11:18:04.313764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.994 [2024-11-20 11:18:04.313771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.994 [2024-11-20 11:18:04.313782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.994 [2024-11-20 11:18:04.313789] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.994 [2024-11-20 11:18:04.313793] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.994 [2024-11-20 11:18:04.313797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:36.994 [2024-11-20 11:18:04.323524] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.994 [2024-11-20 11:18:04.323535] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:36.994 [2024-11-20 11:18:04.323539] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.994 [2024-11-20 11:18:04.323543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.994 [2024-11-20 11:18:04.323557] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:36.994 [2024-11-20 11:18:04.323787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.994 [2024-11-20 11:18:04.323800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8390 with addr=10.0.0.2, port=4420 00:23:36.994 [2024-11-20 11:18:04.323808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8390 is same with the state(6) to be set 00:23:36.994 [2024-11-20 11:18:04.323818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8390 (9): Bad file descriptor 00:23:36.994 [2024-11-20 11:18:04.323829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.994 [2024-11-20 11:18:04.323835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.994 [2024-11-20 11:18:04.323843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.994 [2024-11-20 11:18:04.323849] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.994 [2024-11-20 11:18:04.323854] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.994 [2024-11-20 11:18:04.323857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:36.994 [2024-11-20 11:18:04.333190] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:36.994 [2024-11-20 11:18:04.333206] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:36.994 11:18:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.994 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.995 11:18:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:36.995 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
get_bdev_list 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:37.253 11:18:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.253 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.184 [2024-11-20 11:18:05.646434] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:38.184 [2024-11-20 11:18:05.646463] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:38.184 [2024-11-20 11:18:05.646478] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page 
command 00:23:38.441 [2024-11-20 11:18:05.775865] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:38.699 [2024-11-20 11:18:06.084375] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:38.700 [2024-11-20 11:18:06.085075] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xde1ac0:1 started. 00:23:38.700 [2024-11-20 11:18:06.086740] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:38.700 [2024-11-20 11:18:06.086771] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.700 [2024-11-20 11:18:06.096153] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xde1ac0 was disconnected and freed. delete nvme_qpair. 00:23:38.700 request: 00:23:38.700 { 00:23:38.700 "name": "nvme", 00:23:38.700 "trtype": "tcp", 00:23:38.700 "traddr": "10.0.0.2", 00:23:38.700 "adrfam": "ipv4", 00:23:38.700 "trsvcid": "8009", 00:23:38.700 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:38.700 "wait_for_attach": true, 00:23:38.700 "method": "bdev_nvme_start_discovery", 00:23:38.700 "req_id": 1 00:23:38.700 } 00:23:38.700 Got JSON-RPC error response 00:23:38.700 response: 00:23:38.700 { 00:23:38.700 "code": -17, 00:23:38.700 "message": "File exists" 00:23:38.700 } 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:38.700 
11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.700 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.957 11:18:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.957 request: 00:23:38.957 { 00:23:38.957 "name": "nvme_second", 00:23:38.957 "trtype": "tcp", 00:23:38.957 "traddr": "10.0.0.2", 00:23:38.957 "adrfam": "ipv4", 00:23:38.957 "trsvcid": "8009", 00:23:38.957 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:38.957 "wait_for_attach": true, 00:23:38.957 "method": "bdev_nvme_start_discovery", 00:23:38.957 "req_id": 1 00:23:38.957 } 00:23:38.957 Got JSON-RPC error response 00:23:38.957 response: 00:23:38.957 { 00:23:38.957 "code": -17, 00:23:38.957 "message": "File exists" 00:23:38.957 } 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@655 -- # es=1 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.957 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.888 [2024-11-20 11:18:07.330221] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.888 [2024-11-20 11:18:07.330250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde9d90 with addr=10.0.0.2, port=8010
00:23:39.888 [2024-11-20 11:18:07.330265] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:23:39.888 [2024-11-20 11:18:07.330272] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:23:39.888 [2024-11-20 11:18:07.330278] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:23:41.257 [2024-11-20 11:18:08.332587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:41.257 [2024-11-20 11:18:08.332612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde9d90 with addr=10.0.0.2, port=8010
00:23:41.257 [2024-11-20 11:18:08.332624] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:23:41.257 [2024-11-20 11:18:08.332631] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:23:41.257 [2024-11-20 11:18:08.332637] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:23:42.189 [2024-11-20 11:18:09.334841] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:23:42.189 request:
00:23:42.189 {
00:23:42.189 "name": "nvme_second",
00:23:42.189 "trtype": "tcp",
00:23:42.189 "traddr": "10.0.0.2",
00:23:42.189 "adrfam": "ipv4",
00:23:42.189 "trsvcid": "8010",
00:23:42.189 "hostnqn": "nqn.2021-12.io.spdk:test",
00:23:42.189 "wait_for_attach": false,
00:23:42.189 "attach_timeout_ms": 3000,
00:23:42.189 "method": "bdev_nvme_start_discovery",
00:23:42.189 "req_id": 1
00:23:42.189 }
00:23:42.189 Got JSON-RPC error response
00:23:42.189 response:
00:23:42.189 {
00:23:42.189 "code": -110,
00:23:42.189 "message": "Connection timed out"
00:23:42.189 } 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4161959 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 
00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.190 rmmod nvme_tcp 00:23:42.190 rmmod nvme_fabrics 00:23:42.190 rmmod nvme_keyring 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 4161731 ']' 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 4161731 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 4161731 ']' 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 4161731 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4161731 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:42.190 11:18:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4161731' 00:23:42.190 killing process with pid 4161731 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 4161731 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 4161731 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.190 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.725 00:23:44.725 real 0m17.356s 00:23:44.725 user 0m20.709s 00:23:44.725 sys 0m5.915s 00:23:44.725 11:18:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.725 ************************************ 00:23:44.725 END TEST nvmf_host_discovery 00:23:44.725 ************************************ 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.725 ************************************ 00:23:44.725 START TEST nvmf_host_multipath_status 00:23:44.725 ************************************ 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:44.725 * Looking for test storage... 
00:23:44.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:44.725 11:18:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.725 11:18:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:44.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.725 --rc genhtml_branch_coverage=1 00:23:44.725 --rc genhtml_function_coverage=1 00:23:44.725 --rc genhtml_legend=1 00:23:44.725 --rc geninfo_all_blocks=1 00:23:44.725 --rc geninfo_unexecuted_blocks=1 00:23:44.725 00:23:44.725 ' 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:44.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.725 --rc genhtml_branch_coverage=1 00:23:44.725 --rc genhtml_function_coverage=1 00:23:44.725 --rc genhtml_legend=1 00:23:44.725 --rc geninfo_all_blocks=1 00:23:44.725 --rc geninfo_unexecuted_blocks=1 00:23:44.725 00:23:44.725 ' 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:44.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.725 --rc genhtml_branch_coverage=1 00:23:44.725 --rc genhtml_function_coverage=1 00:23:44.725 --rc genhtml_legend=1 00:23:44.725 --rc geninfo_all_blocks=1 00:23:44.725 --rc geninfo_unexecuted_blocks=1 00:23:44.725 00:23:44.725 ' 00:23:44.725 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:44.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.725 --rc genhtml_branch_coverage=1 00:23:44.725 --rc genhtml_function_coverage=1 00:23:44.725 --rc genhtml_legend=1 00:23:44.725 --rc geninfo_all_blocks=1 00:23:44.725 --rc geninfo_unexecuted_blocks=1 00:23:44.725 00:23:44.725 ' 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:44.726 
11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.726 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.726 11:18:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.726 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.296 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:51.297 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:51.297 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:51.297 Found net devices under 0000:86:00.0: cvl_0_0 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.297 11:18:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:51.297 Found net devices under 0000:86:00.1: cvl_0_1 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.297 11:18:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:23:51.297 00:23:51.297 --- 10.0.0.2 ping statistics --- 00:23:51.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.297 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:23:51.297 00:23:51.297 --- 10.0.0.1 ping statistics --- 00:23:51.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.297 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.297 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.298 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=4167445 00:23:51.298 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 4167445 00:23:51.298 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:51.298 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4167445 ']' 00:23:51.298 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.298 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.298 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.298 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.298 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.298 [2024-11-20 11:18:18.036263] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:23:51.298 [2024-11-20 11:18:18.036306] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.298 [2024-11-20 11:18:18.113254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:51.298 [2024-11-20 11:18:18.154884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.298 [2024-11-20 11:18:18.154923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:51.298 [2024-11-20 11:18:18.154930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.298 [2024-11-20 11:18:18.154936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.298 [2024-11-20 11:18:18.154941] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.298 [2024-11-20 11:18:18.156134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.298 [2024-11-20 11:18:18.156135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.298 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.298 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:51.298 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.298 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.298 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.298 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.298 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4167445 00:23:51.298 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:51.298 [2024-11-20 11:18:18.457065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.298 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:51.298 Malloc0 00:23:51.298 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:51.556 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:51.821 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.821 [2024-11-20 11:18:19.259200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.821 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:52.079 [2024-11-20 11:18:19.471766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:52.079 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4167753 00:23:52.079 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:52.079 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.079 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4167753 /var/tmp/bdevperf.sock 00:23:52.079 11:18:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4167753 ']' 00:23:52.079 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.079 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.079 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.079 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.079 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:52.335 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.335 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:52.335 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:52.592 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:52.850 Nvme0n1 00:23:52.850 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:53.415 Nvme0n1 00:23:53.415 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:53.415 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:55.314 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:55.314 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:55.579 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:55.841 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:56.774 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:56.774 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:56.774 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.774 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.032 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.032 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:57.032 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.032 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:57.289 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.289 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:57.289 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:57.289 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.289 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.290 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:57.290 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.290 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:57.576 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.576 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:57.576 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.576 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.853 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.853 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:57.853 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.853 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:58.120 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.120 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:58.120 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:58.378 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:58.378 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:59.752 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:59.752 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:59.752 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.752 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:59.752 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.752 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:59.752 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.752 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.011 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.011 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:00.011 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.011 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.011 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.011 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.011 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.011 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.268 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.268 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.268 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.268 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:00.527 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.527 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:00.527 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.527 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.785 11:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.785 11:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:00.785 11:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:01.043 11:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:01.301 11:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:02.240 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:02.240 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:02.240 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.240 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.498 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.498 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:02.498 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.498 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:02.498 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.499 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:02.499 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.499 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:02.756 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.756 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:02.756 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.756 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.015 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.015 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:03.015 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.015 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.273 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.273 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:03.273 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.273 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.530 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.531 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:03.531 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:03.531 11:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:03.789 11:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.162 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:05.420 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.420 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:05.420 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.420 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:05.678 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.678 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:05.678 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.678 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:05.939 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.939 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:05.939 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.939 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.197 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.197 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:06.197 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:06.454 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:06.454 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:07.828 11:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:07.828 11:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:07.828 11:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.828 11:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:07.828 11:18:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:07.828 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:07.828 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.828 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.086 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.086 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.086 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.086 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:08.086 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.086 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:08.086 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.086 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:08.344 
11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.344 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:08.345 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.345 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:08.603 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.603 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:08.603 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.603 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:08.862 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.862 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:08.862 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:08.862 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:09.120 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:10.055 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:10.055 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:10.313 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.313 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:10.313 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.313 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:10.313 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.313 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:10.572 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.572 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:10.572 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.572 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:10.831 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.831 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:10.831 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.831 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.090 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.090 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:11.090 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.090 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.388 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.388 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:11.388 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.388 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:11.388 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.388 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:11.645 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:11.645 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:11.903 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:12.160 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:13.092 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:13.092 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.092 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:13.092 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.350 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.350 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:13.350 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.350 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.608 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.608 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.608 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.608 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.866 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.866 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.866 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:13.866 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:13.866 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.866 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:13.866 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:13.866 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.123 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.123 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.123 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.123 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.380 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.380 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:14.380 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:14.639 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:14.897 11:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:15.830 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:15.830 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:15.830 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.830 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.088 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.088 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.088 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.088 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.088 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.088 11:18:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.088 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.088 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.345 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.345 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.345 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.345 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.603 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.603 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.603 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.603 11:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:16.861 11:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.861 
11:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:16.861 11:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:16.861 11:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.119 11:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.119 11:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:17.119 11:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:17.377 11:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:17.377 11:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:18.750 11:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:18.750 11:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:18.750 11:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.750 11:18:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.750 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.750 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:18.750 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.750 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.009 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.009 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:19.009 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.009 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.009 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.009 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.009 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.009 11:18:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.267 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.267 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.267 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.267 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.525 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.525 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:19.525 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.525 11:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.783 11:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.783 11:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:19.783 11:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:20.041 11:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:20.299 11:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:21.230 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:21.230 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:21.230 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.230 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.488 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.488 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:21.488 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.488 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.488 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.488 11:18:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.488 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.488 11:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.746 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.746 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.747 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.747 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.004 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.004 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.004 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.004 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.262 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.262 
11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:22.262 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.262 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.520 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.520 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4167753 00:24:22.520 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4167753 ']' 00:24:22.520 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4167753 00:24:22.520 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:22.520 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.520 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4167753 00:24:22.520 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:22.520 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:22.520 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4167753' 00:24:22.520 killing process with pid 4167753 00:24:22.521 11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4167753 00:24:22.521 
11:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4167753
00:24:22.521 {
00:24:22.521 "results": [
00:24:22.521 {
00:24:22.521 "job": "Nvme0n1",
00:24:22.521 "core_mask": "0x4",
00:24:22.521 "workload": "verify",
00:24:22.521 "status": "terminated",
00:24:22.521 "verify_range": {
00:24:22.521 "start": 0,
00:24:22.521 "length": 16384
00:24:22.521 },
00:24:22.521 "queue_depth": 128,
00:24:22.521 "io_size": 4096,
00:24:22.521 "runtime": 29.050849,
00:24:22.521 "iops": 10361.934689068812,
00:24:22.521 "mibps": 40.47630737917505,
00:24:22.521 "io_failed": 0,
00:24:22.521 "io_timeout": 0,
00:24:22.521 "avg_latency_us": 12333.369995376634,
00:24:22.521 "min_latency_us": 459.46434782608696,
00:24:22.521 "max_latency_us": 3019898.88
00:24:22.521 }
00:24:22.521 ],
00:24:22.521 "core_count": 1
00:24:22.521 }
00:24:22.782 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4167753
00:24:22.782 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:22.782 [2024-11-20 11:18:19.547740] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization...
00:24:22.783 [2024-11-20 11:18:19.547797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167753 ]
00:24:22.783 [2024-11-20 11:18:19.624361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:22.783 [2024-11-20 11:18:19.665701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
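In the terminated-job summary above, the mibps figure follows directly from iops and io_size: with 4096-byte I/Os, MiB/s = IOPS × 4096 / 2^20 = IOPS / 256. A quick arithmetic check against the reported numbers:

```python
# Values taken from the bdevperf results block above.
iops = 10361.934689068812
io_size = 4096        # bytes per I/O
runtime = 29.050849   # seconds

# 4096 / (1024 * 1024) == 1/256, so MiB/s is just IOPS / 256.
mibps = iops * io_size / (1024 * 1024)

print(round(mibps, 5))  # 40.47631, matching the "mibps" field
```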
00:24:22.783 10872.00 IOPS, 42.47 MiB/s [2024-11-20T10:18:50.279Z]
10955.50 IOPS, 42.79 MiB/s [2024-11-20T10:18:50.279Z]
11004.33 IOPS, 42.99 MiB/s [2024-11-20T10:18:50.279Z]
11031.25 IOPS, 43.09 MiB/s [2024-11-20T10:18:50.279Z]
11064.40 IOPS, 43.22 MiB/s [2024-11-20T10:18:50.279Z]
11085.83 IOPS, 43.30 MiB/s [2024-11-20T10:18:50.279Z]
11110.71 IOPS, 43.40 MiB/s [2024-11-20T10:18:50.279Z]
11117.88 IOPS, 43.43 MiB/s [2024-11-20T10:18:50.279Z]
11108.89 IOPS, 43.39 MiB/s [2024-11-20T10:18:50.279Z]
11098.80 IOPS, 43.35 MiB/s [2024-11-20T10:18:50.279Z]
11099.18 IOPS, 43.36 MiB/s [2024-11-20T10:18:50.279Z]
11112.83 IOPS, 43.41 MiB/s [2024-11-20T10:18:50.279Z]
[2024-11-20 11:18:33.701861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:22.783 [2024-11-20 11:18:33.701902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:22.783 [2024-11-20 11:18:33.701940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.783 [2024-11-20 11:18:33.701954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:22.783 [2024-11-20 11:18:33.701968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.783 [2024-11-20 11:18:33.701976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:22.783 [2024-11-20 11:18:33.701989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.783 [2024-11-20 11:18:33.701997] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98752 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 
cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.702612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.702619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.783 [2024-11-20 11:18:33.703051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.783 [2024-11-20 11:18:33.703069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:24:22.784 [2024-11-20 11:18:33.703130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 
[2024-11-20 11:18:33.703247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 
11:18:33.703372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703488] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.784 [2024-11-20 11:18:33.703908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:22.784 [2024-11-20 11:18:33.703922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-11-20 11:18:33.703930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.703944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-11-20 11:18:33.703956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.703971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-11-20 11:18:33.703978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.703993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-11-20 11:18:33.704000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-11-20 11:18:33.704021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-11-20 11:18:33.704043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-11-20 11:18:33.704066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-11-20 11:18:33.704183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:22.785 [2024-11-20 11:18:33.704942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.785 [2024-11-20 11:18:33.704953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:33.705429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:33.705437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:22.786 11029.85 IOPS, 43.09 MiB/s [2024-11-20T10:18:50.282Z] 10242.00 IOPS, 40.01 MiB/s [2024-11-20T10:18:50.282Z] 9559.20 IOPS, 37.34 MiB/s [2024-11-20T10:18:50.282Z] 9029.69 IOPS, 35.27 MiB/s [2024-11-20T10:18:50.282Z] 9161.18 IOPS, 35.79 MiB/s [2024-11-20T10:18:50.282Z] 9273.50 IOPS, 36.22 MiB/s [2024-11-20T10:18:50.282Z] 9430.74 IOPS, 36.84 MiB/s [2024-11-20T10:18:50.282Z] 9628.60 IOPS, 37.61 MiB/s [2024-11-20T10:18:50.282Z] 9800.95 IOPS, 38.28 MiB/s [2024-11-20T10:18:50.282Z] 9874.45 IOPS, 38.57 MiB/s [2024-11-20T10:18:50.282Z] 9928.96 IOPS, 38.78 MiB/s [2024-11-20T10:18:50.282Z] 9974.46 IOPS, 38.96 MiB/s [2024-11-20T10:18:50.282Z] 10093.84 IOPS, 39.43 MiB/s [2024-11-20T10:18:50.282Z] 10218.96 IOPS, 39.92 MiB/s [2024-11-20T10:18:50.282Z] [2024-11-20 11:18:47.512451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512632] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.786 [2024-11-20 11:18:47.512887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:22.786 [2024-11-20 11:18:47.512899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.512906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.512919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.512925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.512937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.512945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.512963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.512970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.512999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.787 [2024-11-20 11:18:47.513026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.787 [2024-11-20 11:18:47.513046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.787 [2024-11-20 11:18:47.513066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.787 [2024-11-20 11:18:47.513087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.787 [2024-11-20 11:18:47.513107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.787 [2024-11-20 11:18:47.513127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:22.787 [2024-11-20 11:18:47.513897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.787 [2024-11-20 11:18:47.513904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.513917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.513924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.513937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.513943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.513962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.513970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.513983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.513990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.514003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.514012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.514025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.514033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.514046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.514053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.514066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.514073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.514904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.514923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.514939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.514946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.514967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.514974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.514987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.514994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.515034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.515054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.515073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.515093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.788 [2024-11-20 11:18:47.515337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:22.788 [2024-11-20 11:18:47.515351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.788 [2024-11-20 11:18:47.515358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:22.788 10303.15 IOPS, 40.25 MiB/s [2024-11-20T10:18:50.284Z] 10334.11 IOPS, 40.37 MiB/s [2024-11-20T10:18:50.284Z] 10362.72 IOPS, 40.48 MiB/s [2024-11-20T10:18:50.284Z] Received shutdown signal, test time was about 29.051510 seconds 00:24:22.788 
00:24:22.788 Latency(us) 00:24:22.788 [2024-11-20T10:18:50.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.788 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:22.788 Verification LBA range: start 0x0 length 0x4000 00:24:22.788 Nvme0n1 : 29.05 10361.93 40.48 0.00 0.00 12333.37 459.46 3019898.88 00:24:22.788 [2024-11-20T10:18:50.284Z] =================================================================================================================== 00:24:22.788 [2024-11-20T10:18:50.284Z] Total : 10361.93 40.48 0.00 0.00 12333.37 459.46 3019898.88 00:24:22.788 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:22.788 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:22.788 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:22.788 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:22.788 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:22.788 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.047 rmmod nvme_tcp 00:24:23.047 rmmod nvme_fabrics 
00:24:23.047 rmmod nvme_keyring 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 4167445 ']' 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 4167445 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4167445 ']' 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4167445 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4167445 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4167445' 00:24:23.047 killing process with pid 4167445 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4167445 00:24:23.047 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4167445 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- 
# '[' '' == iso ']' 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.306 11:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.212 11:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:25.212 00:24:25.212 real 0m40.823s 00:24:25.212 user 1m50.590s 00:24:25.212 sys 0m11.846s 00:24:25.212 11:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.212 11:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:25.212 ************************************ 00:24:25.212 END TEST nvmf_host_multipath_status 00:24:25.212 ************************************ 00:24:25.212 11:18:52 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:25.212 11:18:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:25.212 11:18:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:25.212 11:18:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.474 ************************************ 00:24:25.474 START TEST nvmf_discovery_remove_ifc 00:24:25.474 ************************************ 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:25.474 * Looking for test storage... 00:24:25.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 
00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:25.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.474 --rc genhtml_branch_coverage=1 00:24:25.474 --rc genhtml_function_coverage=1 00:24:25.474 --rc genhtml_legend=1 00:24:25.474 --rc geninfo_all_blocks=1 
00:24:25.474 --rc geninfo_unexecuted_blocks=1 00:24:25.474 00:24:25.474 ' 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:25.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.474 --rc genhtml_branch_coverage=1 00:24:25.474 --rc genhtml_function_coverage=1 00:24:25.474 --rc genhtml_legend=1 00:24:25.474 --rc geninfo_all_blocks=1 00:24:25.474 --rc geninfo_unexecuted_blocks=1 00:24:25.474 00:24:25.474 ' 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:25.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.474 --rc genhtml_branch_coverage=1 00:24:25.474 --rc genhtml_function_coverage=1 00:24:25.474 --rc genhtml_legend=1 00:24:25.474 --rc geninfo_all_blocks=1 00:24:25.474 --rc geninfo_unexecuted_blocks=1 00:24:25.474 00:24:25.474 ' 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:25.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.474 --rc genhtml_branch_coverage=1 00:24:25.474 --rc genhtml_function_coverage=1 00:24:25.474 --rc genhtml_legend=1 00:24:25.474 --rc geninfo_all_blocks=1 00:24:25.474 --rc geninfo_unexecuted_blocks=1 00:24:25.474 00:24:25.474 ' 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.474 
11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.474 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.475 
11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:25.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:25.475 11:18:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:25.475 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.045 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.045 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.045 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.045 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.045 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.045 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.046 11:18:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.046 11:18:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:32.046 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.046 11:18:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:32.046 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:32.046 Found net devices under 0000:86:00.0: cvl_0_0 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:32.046 Found net devices under 0000:86:00.1: cvl_0_1 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.046 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:32.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:24:32.047 00:24:32.047 --- 10.0.0.2 ping statistics --- 00:24:32.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.047 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:24:32.047 00:24:32.047 --- 10.0.0.1 ping statistics --- 00:24:32.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.047 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=4176302 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 4176302 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4176302 ']' 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.047 11:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.047 [2024-11-20 11:18:58.919328] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:24:32.047 [2024-11-20 11:18:58.919375] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.047 [2024-11-20 11:18:58.984769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.047 [2024-11-20 11:18:59.026675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.047 [2024-11-20 11:18:59.026710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:32.047 [2024-11-20 11:18:59.026720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.047 [2024-11-20 11:18:59.026728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.047 [2024-11-20 11:18:59.026734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.047 [2024-11-20 11:18:59.027345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.047 [2024-11-20 11:18:59.170206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.047 [2024-11-20 11:18:59.178371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:32.047 null0 00:24:32.047 [2024-11-20 11:18:59.210358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4176460 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4176460 /tmp/host.sock 00:24:32.047 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4176460 ']' 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:32.048 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.048 [2024-11-20 11:18:59.279341] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:24:32.048 [2024-11-20 11:18:59.279380] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176460 ] 00:24:32.048 [2024-11-20 11:18:59.356498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.048 [2024-11-20 11:18:59.399617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.048 11:18:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.048 11:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.422 [2024-11-20 11:19:00.577087] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:33.422 [2024-11-20 11:19:00.577109] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:33.422 [2024-11-20 11:19:00.577128] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:33.423 [2024-11-20 11:19:00.663389] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:33.423 [2024-11-20 11:19:00.838350] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:33.423 [2024-11-20 11:19:00.839069] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x13259f0:1 started. 
00:24:33.423 [2024-11-20 11:19:00.840408] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:33.423 [2024-11-20 11:19:00.840447] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:33.423 [2024-11-20 11:19:00.840466] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:33.423 [2024-11-20 11:19:00.840478] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:33.423 [2024-11-20 11:19:00.840498] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.423 [2024-11-20 11:19:00.845840] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x13259f0 was disconnected and freed. delete nvme_qpair. 
00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:33.423 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:33.681 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:33.681 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.681 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.681 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.681 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.681 11:19:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.681 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.681 11:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.681 11:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.681 11:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:33.681 11:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.616 11:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.616 11:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.616 11:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.616 11:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.616 11:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.616 11:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.616 11:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.616 11:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.616 11:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:34.616 11:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:35.987 11:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:24:35.987 11:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.987 11:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.988 11:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.988 11:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.988 11:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.988 11:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.988 11:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.988 11:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:35.988 11:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.920 11:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.920 11:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.920 11:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.920 11:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.920 11:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.920 11:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.920 11:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.920 11:19:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.920 11:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:36.920 11:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.852 11:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.852 11:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.852 11:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.852 11:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.852 11:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.852 11:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.852 11:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.852 11:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.852 11:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:37.852 11:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:38.787 11:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.787 11:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.787 11:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.787 11:19:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.787 11:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.787 11:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.787 11:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.787 11:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.046 [2024-11-20 11:19:06.282111] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:39.046 [2024-11-20 11:19:06.282147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.046 [2024-11-20 11:19:06.282158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.046 [2024-11-20 11:19:06.282166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.046 [2024-11-20 11:19:06.282173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.046 [2024-11-20 11:19:06.282181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.046 [2024-11-20 11:19:06.282188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.046 [2024-11-20 11:19:06.282195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.046 
[2024-11-20 11:19:06.282201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.046 [2024-11-20 11:19:06.282211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.046 [2024-11-20 11:19:06.282219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.046 [2024-11-20 11:19:06.282225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302220 is same with the state(6) to be set 00:24:39.046 [2024-11-20 11:19:06.292134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1302220 (9): Bad file descriptor 00:24:39.046 11:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:39.046 11:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.046 [2024-11-20 11:19:06.302167] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:39.046 [2024-11-20 11:19:06.302181] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:39.046 [2024-11-20 11:19:06.302185] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:39.046 [2024-11-20 11:19:06.302190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:39.046 [2024-11-20 11:19:06.302208] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:39.983 11:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.983 11:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.983 11:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.983 11:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.983 11:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.983 11:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.983 11:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.983 [2024-11-20 11:19:07.311981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:39.983 [2024-11-20 11:19:07.312056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1302220 with addr=10.0.0.2, port=4420 00:24:39.983 [2024-11-20 11:19:07.312088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302220 is same with the state(6) to be set 00:24:39.983 [2024-11-20 11:19:07.312138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1302220 (9): Bad file descriptor 00:24:39.983 [2024-11-20 11:19:07.313097] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:39.983 [2024-11-20 11:19:07.313160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:39.983 [2024-11-20 11:19:07.313185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:39.983 [2024-11-20 11:19:07.313208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:39.983 [2024-11-20 11:19:07.313227] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:39.983 [2024-11-20 11:19:07.313242] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:39.983 [2024-11-20 11:19:07.313255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:39.983 [2024-11-20 11:19:07.313277] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:39.983 [2024-11-20 11:19:07.313291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:39.984 11:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.984 11:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:39.984 11:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.922 [2024-11-20 11:19:08.315813] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:40.922 [2024-11-20 11:19:08.315833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:40.922 [2024-11-20 11:19:08.315843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:40.922 [2024-11-20 11:19:08.315855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:40.922 [2024-11-20 11:19:08.315861] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:40.922 [2024-11-20 11:19:08.315867] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:40.922 [2024-11-20 11:19:08.315872] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:40.922 [2024-11-20 11:19:08.315876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:40.922 [2024-11-20 11:19:08.315892] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:40.922 [2024-11-20 11:19:08.315909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.922 [2024-11-20 11:19:08.315918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.922 [2024-11-20 11:19:08.315927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.922 [2024-11-20 11:19:08.315933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.922 [2024-11-20 11:19:08.315940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:40.922 [2024-11-20 11:19:08.315950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.922 [2024-11-20 11:19:08.315957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.922 [2024-11-20 11:19:08.315963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.922 [2024-11-20 11:19:08.315970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.922 [2024-11-20 11:19:08.315976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.922 [2024-11-20 11:19:08.315982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:40.922 [2024-11-20 11:19:08.316450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f1900 (9): Bad file descriptor 00:24:40.922 [2024-11-20 11:19:08.317461] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:40.923 [2024-11-20 11:19:08.317472] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.923 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:41.181 11:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:42.114 11:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:42.114 11:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.114 11:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.114 11:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.114 11:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.114 11:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.114 11:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.114 11:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.114 11:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:42.114 11:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.047 [2024-11-20 11:19:10.372016] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:43.047 [2024-11-20 11:19:10.372038] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:43.047 [2024-11-20 11:19:10.372056] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:43.047 [2024-11-20 11:19:10.458310] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:43.047 [2024-11-20 11:19:10.512816] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:43.047 [2024-11-20 11:19:10.513381] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x12f6760:1 started. 00:24:43.047 [2024-11-20 11:19:10.514457] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:43.047 [2024-11-20 11:19:10.514490] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:43.047 [2024-11-20 11:19:10.514507] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:43.047 [2024-11-20 11:19:10.514519] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:43.047 [2024-11-20 11:19:10.514526] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:43.047 [2024-11-20 11:19:10.520526] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x12f6760 was disconnected and freed. delete nvme_qpair. 
00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4176460 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4176460 ']' 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4176460 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4176460 
00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4176460' 00:24:43.305 killing process with pid 4176460 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4176460 00:24:43.305 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4176460 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.564 rmmod nvme_tcp 00:24:43.564 rmmod nvme_fabrics 00:24:43.564 rmmod nvme_keyring 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 4176302 ']' 00:24:43.564 
11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 4176302 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4176302 ']' 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4176302 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4176302 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4176302' 00:24:43.564 killing process with pid 4176302 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4176302 00:24:43.564 11:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4176302 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:43.823 11:19:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.823 11:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.728 11:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.728 00:24:45.728 real 0m20.452s 00:24:45.728 user 0m24.673s 00:24:45.728 sys 0m5.875s 00:24:45.728 11:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.728 11:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.728 ************************************ 00:24:45.728 END TEST nvmf_discovery_remove_ifc 00:24:45.728 ************************************ 00:24:45.728 11:19:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:45.728 11:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:45.728 11:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.728 11:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.989 ************************************ 
00:24:45.989 START TEST nvmf_identify_kernel_target 00:24:45.989 ************************************ 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:45.989 * Looking for test storage... 00:24:45.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.989 11:19:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:45.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.989 --rc genhtml_branch_coverage=1 00:24:45.989 --rc genhtml_function_coverage=1 00:24:45.989 --rc genhtml_legend=1 00:24:45.989 --rc geninfo_all_blocks=1 00:24:45.989 --rc geninfo_unexecuted_blocks=1 00:24:45.989 00:24:45.989 ' 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:45.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.989 --rc genhtml_branch_coverage=1 00:24:45.989 --rc genhtml_function_coverage=1 00:24:45.989 --rc genhtml_legend=1 00:24:45.989 --rc geninfo_all_blocks=1 00:24:45.989 --rc geninfo_unexecuted_blocks=1 00:24:45.989 00:24:45.989 ' 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:45.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.989 --rc genhtml_branch_coverage=1 00:24:45.989 --rc genhtml_function_coverage=1 00:24:45.989 --rc genhtml_legend=1 00:24:45.989 --rc geninfo_all_blocks=1 00:24:45.989 --rc geninfo_unexecuted_blocks=1 00:24:45.989 00:24:45.989 ' 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:45.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.989 --rc genhtml_branch_coverage=1 00:24:45.989 --rc genhtml_function_coverage=1 00:24:45.989 --rc genhtml_legend=1 00:24:45.989 --rc geninfo_all_blocks=1 
00:24:45.989 --rc geninfo_unexecuted_blocks=1 00:24:45.989 00:24:45.989 ' 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.989 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.990 11:19:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.664 11:19:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.664 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:52.665 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.665 11:19:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:52.665 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.665 11:19:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:52.665 Found net devices under 0000:86:00.0: cvl_0_0 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:52.665 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:52.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:52.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:24:52.665 00:24:52.665 --- 10.0.0.2 ping statistics --- 00:24:52.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.665 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:52.665 00:24:52.665 --- 10.0.0.1 ping statistics --- 00:24:52.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.665 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:52.665 
11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.665 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:52.666 11:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:55.202 Waiting for block devices as requested 00:24:55.202 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:55.202 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:55.202 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:55.202 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:55.202 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:55.202 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:55.461 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:55.461 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:55.461 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:55.461 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:55.724 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:55.724 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:55.724 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:55.724 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:55.983 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:24:55.983 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:55.983 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:56.243 No valid GPT data, bailing 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:56.243 00:24:56.243 Discovery Log Number of Records 2, Generation counter 2 00:24:56.243 =====Discovery Log Entry 0====== 00:24:56.243 trtype: tcp 00:24:56.243 adrfam: ipv4 00:24:56.243 subtype: current discovery subsystem 
00:24:56.243 treq: not specified, sq flow control disable supported 00:24:56.243 portid: 1 00:24:56.243 trsvcid: 4420 00:24:56.243 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:56.243 traddr: 10.0.0.1 00:24:56.243 eflags: none 00:24:56.243 sectype: none 00:24:56.243 =====Discovery Log Entry 1====== 00:24:56.243 trtype: tcp 00:24:56.243 adrfam: ipv4 00:24:56.243 subtype: nvme subsystem 00:24:56.243 treq: not specified, sq flow control disable supported 00:24:56.243 portid: 1 00:24:56.243 trsvcid: 4420 00:24:56.243 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:56.243 traddr: 10.0.0.1 00:24:56.243 eflags: none 00:24:56.243 sectype: none 00:24:56.243 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:56.243 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:56.503 ===================================================== 00:24:56.503 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:56.503 ===================================================== 00:24:56.503 Controller Capabilities/Features 00:24:56.503 ================================ 00:24:56.503 Vendor ID: 0000 00:24:56.503 Subsystem Vendor ID: 0000 00:24:56.503 Serial Number: 761b6290f32e9f665f3d 00:24:56.503 Model Number: Linux 00:24:56.503 Firmware Version: 6.8.9-20 00:24:56.503 Recommended Arb Burst: 0 00:24:56.503 IEEE OUI Identifier: 00 00 00 00:24:56.503 Multi-path I/O 00:24:56.503 May have multiple subsystem ports: No 00:24:56.503 May have multiple controllers: No 00:24:56.503 Associated with SR-IOV VF: No 00:24:56.503 Max Data Transfer Size: Unlimited 00:24:56.503 Max Number of Namespaces: 0 00:24:56.503 Max Number of I/O Queues: 1024 00:24:56.503 NVMe Specification Version (VS): 1.3 00:24:56.503 NVMe Specification Version (Identify): 1.3 00:24:56.503 Maximum Queue Entries: 1024 
00:24:56.503 Contiguous Queues Required: No 00:24:56.503 Arbitration Mechanisms Supported 00:24:56.503 Weighted Round Robin: Not Supported 00:24:56.503 Vendor Specific: Not Supported 00:24:56.503 Reset Timeout: 7500 ms 00:24:56.503 Doorbell Stride: 4 bytes 00:24:56.503 NVM Subsystem Reset: Not Supported 00:24:56.503 Command Sets Supported 00:24:56.503 NVM Command Set: Supported 00:24:56.503 Boot Partition: Not Supported 00:24:56.503 Memory Page Size Minimum: 4096 bytes 00:24:56.503 Memory Page Size Maximum: 4096 bytes 00:24:56.503 Persistent Memory Region: Not Supported 00:24:56.503 Optional Asynchronous Events Supported 00:24:56.503 Namespace Attribute Notices: Not Supported 00:24:56.503 Firmware Activation Notices: Not Supported 00:24:56.503 ANA Change Notices: Not Supported 00:24:56.503 PLE Aggregate Log Change Notices: Not Supported 00:24:56.503 LBA Status Info Alert Notices: Not Supported 00:24:56.503 EGE Aggregate Log Change Notices: Not Supported 00:24:56.503 Normal NVM Subsystem Shutdown event: Not Supported 00:24:56.503 Zone Descriptor Change Notices: Not Supported 00:24:56.503 Discovery Log Change Notices: Supported 00:24:56.503 Controller Attributes 00:24:56.503 128-bit Host Identifier: Not Supported 00:24:56.503 Non-Operational Permissive Mode: Not Supported 00:24:56.503 NVM Sets: Not Supported 00:24:56.503 Read Recovery Levels: Not Supported 00:24:56.503 Endurance Groups: Not Supported 00:24:56.503 Predictable Latency Mode: Not Supported 00:24:56.503 Traffic Based Keep ALive: Not Supported 00:24:56.503 Namespace Granularity: Not Supported 00:24:56.503 SQ Associations: Not Supported 00:24:56.503 UUID List: Not Supported 00:24:56.503 Multi-Domain Subsystem: Not Supported 00:24:56.503 Fixed Capacity Management: Not Supported 00:24:56.503 Variable Capacity Management: Not Supported 00:24:56.503 Delete Endurance Group: Not Supported 00:24:56.503 Delete NVM Set: Not Supported 00:24:56.503 Extended LBA Formats Supported: Not Supported 00:24:56.503 Flexible 
Data Placement Supported: Not Supported 00:24:56.503 00:24:56.503 Controller Memory Buffer Support 00:24:56.503 ================================ 00:24:56.503 Supported: No 00:24:56.503 00:24:56.503 Persistent Memory Region Support 00:24:56.503 ================================ 00:24:56.503 Supported: No 00:24:56.503 00:24:56.503 Admin Command Set Attributes 00:24:56.503 ============================ 00:24:56.503 Security Send/Receive: Not Supported 00:24:56.504 Format NVM: Not Supported 00:24:56.504 Firmware Activate/Download: Not Supported 00:24:56.504 Namespace Management: Not Supported 00:24:56.504 Device Self-Test: Not Supported 00:24:56.504 Directives: Not Supported 00:24:56.504 NVMe-MI: Not Supported 00:24:56.504 Virtualization Management: Not Supported 00:24:56.504 Doorbell Buffer Config: Not Supported 00:24:56.504 Get LBA Status Capability: Not Supported 00:24:56.504 Command & Feature Lockdown Capability: Not Supported 00:24:56.504 Abort Command Limit: 1 00:24:56.504 Async Event Request Limit: 1 00:24:56.504 Number of Firmware Slots: N/A 00:24:56.504 Firmware Slot 1 Read-Only: N/A 00:24:56.504 Firmware Activation Without Reset: N/A 00:24:56.504 Multiple Update Detection Support: N/A 00:24:56.504 Firmware Update Granularity: No Information Provided 00:24:56.504 Per-Namespace SMART Log: No 00:24:56.504 Asymmetric Namespace Access Log Page: Not Supported 00:24:56.504 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:56.504 Command Effects Log Page: Not Supported 00:24:56.504 Get Log Page Extended Data: Supported 00:24:56.504 Telemetry Log Pages: Not Supported 00:24:56.504 Persistent Event Log Pages: Not Supported 00:24:56.504 Supported Log Pages Log Page: May Support 00:24:56.504 Commands Supported & Effects Log Page: Not Supported 00:24:56.504 Feature Identifiers & Effects Log Page:May Support 00:24:56.504 NVMe-MI Commands & Effects Log Page: May Support 00:24:56.504 Data Area 4 for Telemetry Log: Not Supported 00:24:56.504 Error Log Page Entries 
Supported: 1 00:24:56.504 Keep Alive: Not Supported 00:24:56.504 00:24:56.504 NVM Command Set Attributes 00:24:56.504 ========================== 00:24:56.504 Submission Queue Entry Size 00:24:56.504 Max: 1 00:24:56.504 Min: 1 00:24:56.504 Completion Queue Entry Size 00:24:56.504 Max: 1 00:24:56.504 Min: 1 00:24:56.504 Number of Namespaces: 0 00:24:56.504 Compare Command: Not Supported 00:24:56.504 Write Uncorrectable Command: Not Supported 00:24:56.504 Dataset Management Command: Not Supported 00:24:56.504 Write Zeroes Command: Not Supported 00:24:56.504 Set Features Save Field: Not Supported 00:24:56.504 Reservations: Not Supported 00:24:56.504 Timestamp: Not Supported 00:24:56.504 Copy: Not Supported 00:24:56.504 Volatile Write Cache: Not Present 00:24:56.504 Atomic Write Unit (Normal): 1 00:24:56.504 Atomic Write Unit (PFail): 1 00:24:56.504 Atomic Compare & Write Unit: 1 00:24:56.504 Fused Compare & Write: Not Supported 00:24:56.504 Scatter-Gather List 00:24:56.504 SGL Command Set: Supported 00:24:56.504 SGL Keyed: Not Supported 00:24:56.504 SGL Bit Bucket Descriptor: Not Supported 00:24:56.504 SGL Metadata Pointer: Not Supported 00:24:56.504 Oversized SGL: Not Supported 00:24:56.504 SGL Metadata Address: Not Supported 00:24:56.504 SGL Offset: Supported 00:24:56.504 Transport SGL Data Block: Not Supported 00:24:56.504 Replay Protected Memory Block: Not Supported 00:24:56.504 00:24:56.504 Firmware Slot Information 00:24:56.504 ========================= 00:24:56.504 Active slot: 0 00:24:56.504 00:24:56.504 00:24:56.504 Error Log 00:24:56.504 ========= 00:24:56.504 00:24:56.504 Active Namespaces 00:24:56.504 ================= 00:24:56.504 Discovery Log Page 00:24:56.504 ================== 00:24:56.504 Generation Counter: 2 00:24:56.504 Number of Records: 2 00:24:56.504 Record Format: 0 00:24:56.504 00:24:56.504 Discovery Log Entry 0 00:24:56.504 ---------------------- 00:24:56.504 Transport Type: 3 (TCP) 00:24:56.504 Address Family: 1 (IPv4) 00:24:56.504 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:24:56.504 Entry Flags: 00:24:56.504 Duplicate Returned Information: 0 00:24:56.504 Explicit Persistent Connection Support for Discovery: 0 00:24:56.504 Transport Requirements: 00:24:56.504 Secure Channel: Not Specified 00:24:56.504 Port ID: 1 (0x0001) 00:24:56.504 Controller ID: 65535 (0xffff) 00:24:56.504 Admin Max SQ Size: 32 00:24:56.504 Transport Service Identifier: 4420 00:24:56.504 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:56.504 Transport Address: 10.0.0.1 00:24:56.504 Discovery Log Entry 1 00:24:56.504 ---------------------- 00:24:56.504 Transport Type: 3 (TCP) 00:24:56.504 Address Family: 1 (IPv4) 00:24:56.504 Subsystem Type: 2 (NVM Subsystem) 00:24:56.504 Entry Flags: 00:24:56.504 Duplicate Returned Information: 0 00:24:56.504 Explicit Persistent Connection Support for Discovery: 0 00:24:56.504 Transport Requirements: 00:24:56.504 Secure Channel: Not Specified 00:24:56.504 Port ID: 1 (0x0001) 00:24:56.504 Controller ID: 65535 (0xffff) 00:24:56.504 Admin Max SQ Size: 32 00:24:56.504 Transport Service Identifier: 4420 00:24:56.504 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:56.504 Transport Address: 10.0.0.1 00:24:56.504 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:56.504 get_feature(0x01) failed 00:24:56.504 get_feature(0x02) failed 00:24:56.504 get_feature(0x04) failed 00:24:56.504 ===================================================== 00:24:56.504 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:56.504 ===================================================== 00:24:56.504 Controller Capabilities/Features 00:24:56.504 ================================ 00:24:56.504 Vendor ID: 0000 00:24:56.504 Subsystem Vendor ID: 
0000 00:24:56.504 Serial Number: 55b8d17d3af995139898 00:24:56.504 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:56.504 Firmware Version: 6.8.9-20 00:24:56.504 Recommended Arb Burst: 6 00:24:56.504 IEEE OUI Identifier: 00 00 00 00:24:56.504 Multi-path I/O 00:24:56.504 May have multiple subsystem ports: Yes 00:24:56.504 May have multiple controllers: Yes 00:24:56.504 Associated with SR-IOV VF: No 00:24:56.504 Max Data Transfer Size: Unlimited 00:24:56.504 Max Number of Namespaces: 1024 00:24:56.504 Max Number of I/O Queues: 128 00:24:56.504 NVMe Specification Version (VS): 1.3 00:24:56.504 NVMe Specification Version (Identify): 1.3 00:24:56.504 Maximum Queue Entries: 1024 00:24:56.504 Contiguous Queues Required: No 00:24:56.504 Arbitration Mechanisms Supported 00:24:56.504 Weighted Round Robin: Not Supported 00:24:56.504 Vendor Specific: Not Supported 00:24:56.504 Reset Timeout: 7500 ms 00:24:56.504 Doorbell Stride: 4 bytes 00:24:56.504 NVM Subsystem Reset: Not Supported 00:24:56.504 Command Sets Supported 00:24:56.504 NVM Command Set: Supported 00:24:56.504 Boot Partition: Not Supported 00:24:56.504 Memory Page Size Minimum: 4096 bytes 00:24:56.504 Memory Page Size Maximum: 4096 bytes 00:24:56.504 Persistent Memory Region: Not Supported 00:24:56.504 Optional Asynchronous Events Supported 00:24:56.504 Namespace Attribute Notices: Supported 00:24:56.504 Firmware Activation Notices: Not Supported 00:24:56.504 ANA Change Notices: Supported 00:24:56.504 PLE Aggregate Log Change Notices: Not Supported 00:24:56.504 LBA Status Info Alert Notices: Not Supported 00:24:56.504 EGE Aggregate Log Change Notices: Not Supported 00:24:56.504 Normal NVM Subsystem Shutdown event: Not Supported 00:24:56.504 Zone Descriptor Change Notices: Not Supported 00:24:56.504 Discovery Log Change Notices: Not Supported 00:24:56.504 Controller Attributes 00:24:56.504 128-bit Host Identifier: Supported 00:24:56.504 Non-Operational Permissive Mode: Not Supported 00:24:56.504 NVM Sets: Not 
Supported 00:24:56.504 Read Recovery Levels: Not Supported 00:24:56.504 Endurance Groups: Not Supported 00:24:56.504 Predictable Latency Mode: Not Supported 00:24:56.504 Traffic Based Keep ALive: Supported 00:24:56.504 Namespace Granularity: Not Supported 00:24:56.504 SQ Associations: Not Supported 00:24:56.504 UUID List: Not Supported 00:24:56.504 Multi-Domain Subsystem: Not Supported 00:24:56.504 Fixed Capacity Management: Not Supported 00:24:56.504 Variable Capacity Management: Not Supported 00:24:56.504 Delete Endurance Group: Not Supported 00:24:56.504 Delete NVM Set: Not Supported 00:24:56.504 Extended LBA Formats Supported: Not Supported 00:24:56.504 Flexible Data Placement Supported: Not Supported 00:24:56.504 00:24:56.504 Controller Memory Buffer Support 00:24:56.504 ================================ 00:24:56.504 Supported: No 00:24:56.504 00:24:56.504 Persistent Memory Region Support 00:24:56.504 ================================ 00:24:56.504 Supported: No 00:24:56.504 00:24:56.504 Admin Command Set Attributes 00:24:56.504 ============================ 00:24:56.504 Security Send/Receive: Not Supported 00:24:56.504 Format NVM: Not Supported 00:24:56.505 Firmware Activate/Download: Not Supported 00:24:56.505 Namespace Management: Not Supported 00:24:56.505 Device Self-Test: Not Supported 00:24:56.505 Directives: Not Supported 00:24:56.505 NVMe-MI: Not Supported 00:24:56.505 Virtualization Management: Not Supported 00:24:56.505 Doorbell Buffer Config: Not Supported 00:24:56.505 Get LBA Status Capability: Not Supported 00:24:56.505 Command & Feature Lockdown Capability: Not Supported 00:24:56.505 Abort Command Limit: 4 00:24:56.505 Async Event Request Limit: 4 00:24:56.505 Number of Firmware Slots: N/A 00:24:56.505 Firmware Slot 1 Read-Only: N/A 00:24:56.505 Firmware Activation Without Reset: N/A 00:24:56.505 Multiple Update Detection Support: N/A 00:24:56.505 Firmware Update Granularity: No Information Provided 00:24:56.505 Per-Namespace SMART Log: Yes 
00:24:56.505 Asymmetric Namespace Access Log Page: Supported 00:24:56.505 ANA Transition Time : 10 sec 00:24:56.505 00:24:56.505 Asymmetric Namespace Access Capabilities 00:24:56.505 ANA Optimized State : Supported 00:24:56.505 ANA Non-Optimized State : Supported 00:24:56.505 ANA Inaccessible State : Supported 00:24:56.505 ANA Persistent Loss State : Supported 00:24:56.505 ANA Change State : Supported 00:24:56.505 ANAGRPID is not changed : No 00:24:56.505 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:56.505 00:24:56.505 ANA Group Identifier Maximum : 128 00:24:56.505 Number of ANA Group Identifiers : 128 00:24:56.505 Max Number of Allowed Namespaces : 1024 00:24:56.505 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:56.505 Command Effects Log Page: Supported 00:24:56.505 Get Log Page Extended Data: Supported 00:24:56.505 Telemetry Log Pages: Not Supported 00:24:56.505 Persistent Event Log Pages: Not Supported 00:24:56.505 Supported Log Pages Log Page: May Support 00:24:56.505 Commands Supported & Effects Log Page: Not Supported 00:24:56.505 Feature Identifiers & Effects Log Page:May Support 00:24:56.505 NVMe-MI Commands & Effects Log Page: May Support 00:24:56.505 Data Area 4 for Telemetry Log: Not Supported 00:24:56.505 Error Log Page Entries Supported: 128 00:24:56.505 Keep Alive: Supported 00:24:56.505 Keep Alive Granularity: 1000 ms 00:24:56.505 00:24:56.505 NVM Command Set Attributes 00:24:56.505 ========================== 00:24:56.505 Submission Queue Entry Size 00:24:56.505 Max: 64 00:24:56.505 Min: 64 00:24:56.505 Completion Queue Entry Size 00:24:56.505 Max: 16 00:24:56.505 Min: 16 00:24:56.505 Number of Namespaces: 1024 00:24:56.505 Compare Command: Not Supported 00:24:56.505 Write Uncorrectable Command: Not Supported 00:24:56.505 Dataset Management Command: Supported 00:24:56.505 Write Zeroes Command: Supported 00:24:56.505 Set Features Save Field: Not Supported 00:24:56.505 Reservations: Not Supported 00:24:56.505 Timestamp: Not Supported 
00:24:56.505 Copy: Not Supported 00:24:56.505 Volatile Write Cache: Present 00:24:56.505 Atomic Write Unit (Normal): 1 00:24:56.505 Atomic Write Unit (PFail): 1 00:24:56.505 Atomic Compare & Write Unit: 1 00:24:56.505 Fused Compare & Write: Not Supported 00:24:56.505 Scatter-Gather List 00:24:56.505 SGL Command Set: Supported 00:24:56.505 SGL Keyed: Not Supported 00:24:56.505 SGL Bit Bucket Descriptor: Not Supported 00:24:56.505 SGL Metadata Pointer: Not Supported 00:24:56.505 Oversized SGL: Not Supported 00:24:56.505 SGL Metadata Address: Not Supported 00:24:56.505 SGL Offset: Supported 00:24:56.505 Transport SGL Data Block: Not Supported 00:24:56.505 Replay Protected Memory Block: Not Supported 00:24:56.505 00:24:56.505 Firmware Slot Information 00:24:56.505 ========================= 00:24:56.505 Active slot: 0 00:24:56.505 00:24:56.505 Asymmetric Namespace Access 00:24:56.505 =========================== 00:24:56.505 Change Count : 0 00:24:56.505 Number of ANA Group Descriptors : 1 00:24:56.505 ANA Group Descriptor : 0 00:24:56.505 ANA Group ID : 1 00:24:56.505 Number of NSID Values : 1 00:24:56.505 Change Count : 0 00:24:56.505 ANA State : 1 00:24:56.505 Namespace Identifier : 1 00:24:56.505 00:24:56.505 Commands Supported and Effects 00:24:56.505 ============================== 00:24:56.505 Admin Commands 00:24:56.505 -------------- 00:24:56.505 Get Log Page (02h): Supported 00:24:56.505 Identify (06h): Supported 00:24:56.505 Abort (08h): Supported 00:24:56.505 Set Features (09h): Supported 00:24:56.505 Get Features (0Ah): Supported 00:24:56.505 Asynchronous Event Request (0Ch): Supported 00:24:56.505 Keep Alive (18h): Supported 00:24:56.505 I/O Commands 00:24:56.505 ------------ 00:24:56.505 Flush (00h): Supported 00:24:56.505 Write (01h): Supported LBA-Change 00:24:56.505 Read (02h): Supported 00:24:56.505 Write Zeroes (08h): Supported LBA-Change 00:24:56.505 Dataset Management (09h): Supported 00:24:56.505 00:24:56.505 Error Log 00:24:56.505 ========= 
00:24:56.505 Entry: 0 00:24:56.505 Error Count: 0x3 00:24:56.505 Submission Queue Id: 0x0 00:24:56.505 Command Id: 0x5 00:24:56.505 Phase Bit: 0 00:24:56.505 Status Code: 0x2 00:24:56.505 Status Code Type: 0x0 00:24:56.505 Do Not Retry: 1 00:24:56.505 Error Location: 0x28 00:24:56.505 LBA: 0x0 00:24:56.505 Namespace: 0x0 00:24:56.505 Vendor Log Page: 0x0 00:24:56.505 ----------- 00:24:56.505 Entry: 1 00:24:56.505 Error Count: 0x2 00:24:56.505 Submission Queue Id: 0x0 00:24:56.505 Command Id: 0x5 00:24:56.505 Phase Bit: 0 00:24:56.505 Status Code: 0x2 00:24:56.505 Status Code Type: 0x0 00:24:56.505 Do Not Retry: 1 00:24:56.505 Error Location: 0x28 00:24:56.505 LBA: 0x0 00:24:56.505 Namespace: 0x0 00:24:56.505 Vendor Log Page: 0x0 00:24:56.505 ----------- 00:24:56.505 Entry: 2 00:24:56.505 Error Count: 0x1 00:24:56.505 Submission Queue Id: 0x0 00:24:56.505 Command Id: 0x4 00:24:56.505 Phase Bit: 0 00:24:56.505 Status Code: 0x2 00:24:56.505 Status Code Type: 0x0 00:24:56.505 Do Not Retry: 1 00:24:56.505 Error Location: 0x28 00:24:56.505 LBA: 0x0 00:24:56.505 Namespace: 0x0 00:24:56.505 Vendor Log Page: 0x0 00:24:56.505 00:24:56.505 Number of Queues 00:24:56.505 ================ 00:24:56.505 Number of I/O Submission Queues: 128 00:24:56.505 Number of I/O Completion Queues: 128 00:24:56.505 00:24:56.505 ZNS Specific Controller Data 00:24:56.505 ============================ 00:24:56.505 Zone Append Size Limit: 0 00:24:56.505 00:24:56.505 00:24:56.505 Active Namespaces 00:24:56.505 ================= 00:24:56.505 get_feature(0x05) failed 00:24:56.505 Namespace ID:1 00:24:56.505 Command Set Identifier: NVM (00h) 00:24:56.505 Deallocate: Supported 00:24:56.505 Deallocated/Unwritten Error: Not Supported 00:24:56.505 Deallocated Read Value: Unknown 00:24:56.505 Deallocate in Write Zeroes: Not Supported 00:24:56.505 Deallocated Guard Field: 0xFFFF 00:24:56.505 Flush: Supported 00:24:56.505 Reservation: Not Supported 00:24:56.505 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:56.505 Size (in LBAs): 1953525168 (931GiB) 00:24:56.505 Capacity (in LBAs): 1953525168 (931GiB) 00:24:56.505 Utilization (in LBAs): 1953525168 (931GiB) 00:24:56.505 UUID: 1d92f1ab-33b8-4fed-aa71-800fdf9b596c 00:24:56.505 Thin Provisioning: Not Supported 00:24:56.505 Per-NS Atomic Units: Yes 00:24:56.505 Atomic Boundary Size (Normal): 0 00:24:56.505 Atomic Boundary Size (PFail): 0 00:24:56.505 Atomic Boundary Offset: 0 00:24:56.505 NGUID/EUI64 Never Reused: No 00:24:56.505 ANA group ID: 1 00:24:56.505 Namespace Write Protected: No 00:24:56.505 Number of LBA Formats: 1 00:24:56.505 Current LBA Format: LBA Format #00 00:24:56.505 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:56.505 00:24:56.505 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:56.505 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:56.505 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:56.505 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:56.505 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:56.505 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:56.505 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:56.505 rmmod nvme_tcp 00:24:56.505 rmmod nvme_fabrics 00:24:56.505 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:56.505 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:56.505 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.506 11:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.043 11:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.043 11:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:59.043 11:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:59.043 11:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:59.043 11:19:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.043 11:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:59.043 11:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:59.043 11:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.043 11:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:59.043 11:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:59.043 11:19:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:01.579 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:01.579 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:01.580 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:25:02.531 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:02.531 00:25:02.531 real 0m16.726s 00:25:02.531 user 0m4.297s 00:25:02.531 sys 0m8.862s 00:25:02.531 11:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.531 11:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.531 ************************************ 00:25:02.531 END TEST nvmf_identify_kernel_target 00:25:02.531 ************************************ 00:25:02.531 11:19:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:02.531 11:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:02.531 11:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.531 11:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.791 ************************************ 00:25:02.791 START TEST nvmf_auth_host 00:25:02.791 ************************************ 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:02.791 * Looking for test storage... 
00:25:02.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.791 --rc genhtml_branch_coverage=1 00:25:02.791 --rc genhtml_function_coverage=1 00:25:02.791 --rc genhtml_legend=1 00:25:02.791 --rc geninfo_all_blocks=1 00:25:02.791 --rc geninfo_unexecuted_blocks=1 00:25:02.791 00:25:02.791 ' 00:25:02.791 11:19:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.791 --rc genhtml_branch_coverage=1 00:25:02.791 --rc genhtml_function_coverage=1 00:25:02.791 --rc genhtml_legend=1 00:25:02.791 --rc geninfo_all_blocks=1 00:25:02.791 --rc geninfo_unexecuted_blocks=1 00:25:02.791 00:25:02.791 ' 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.791 --rc genhtml_branch_coverage=1 00:25:02.791 --rc genhtml_function_coverage=1 00:25:02.791 --rc genhtml_legend=1 00:25:02.791 --rc geninfo_all_blocks=1 00:25:02.791 --rc geninfo_unexecuted_blocks=1 00:25:02.791 00:25:02.791 ' 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:02.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.791 --rc genhtml_branch_coverage=1 00:25:02.791 --rc genhtml_function_coverage=1 00:25:02.791 --rc genhtml_legend=1 00:25:02.791 --rc geninfo_all_blocks=1 00:25:02.791 --rc geninfo_unexecuted_blocks=1 00:25:02.791 00:25:02.791 ' 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.791 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.792 11:19:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:02.792 11:19:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.792 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:09.365 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:09.365 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:09.365 Found net devices under 0000:86:00.0: cvl_0_0 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:09.365 Found net devices under 0000:86:00.1: cvl_0_1 00:25:09.365 11:19:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:09.365 11:19:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.365 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.365 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.365 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:09.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:25:09.366 00:25:09.366 --- 10.0.0.2 ping statistics --- 00:25:09.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.366 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:09.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:25:09.366 00:25:09.366 --- 10.0.0.1 ping statistics --- 00:25:09.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.366 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=4188304 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:09.366 11:19:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 4188304 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4188304 ']' 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c59d4c054fecd3c795d21b7eb6a65b2c 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Xdd 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c59d4c054fecd3c795d21b7eb6a65b2c 0 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c59d4c054fecd3c795d21b7eb6a65b2c 0 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c59d4c054fecd3c795d21b7eb6a65b2c 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Xdd 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Xdd 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Xdd 00:25:09.366 11:19:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6825d549421754473ef1973e3072eaa7ea4b3fbfd5ad20edfb380949e5b8d53e 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dxK 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6825d549421754473ef1973e3072eaa7ea4b3fbfd5ad20edfb380949e5b8d53e 3 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6825d549421754473ef1973e3072eaa7ea4b3fbfd5ad20edfb380949e5b8d53e 3 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6825d549421754473ef1973e3072eaa7ea4b3fbfd5ad20edfb380949e5b8d53e 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dxK 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dxK 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.dxK 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf5a12c695512077c69ab562d1bb4a3a78f1dbd8c36c5bcc 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.p01 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf5a12c695512077c69ab562d1bb4a3a78f1dbd8c36c5bcc 0 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf5a12c695512077c69ab562d1bb4a3a78f1dbd8c36c5bcc 0 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:09.366 11:19:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf5a12c695512077c69ab562d1bb4a3a78f1dbd8c36c5bcc 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.p01 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.p01 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.p01 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:09.366 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e960df9f04fa3b127e9b32dd2c1ef93281cbba874a65d7a6 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gsi 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e960df9f04fa3b127e9b32dd2c1ef93281cbba874a65d7a6 2 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 e960df9f04fa3b127e9b32dd2c1ef93281cbba874a65d7a6 2 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e960df9f04fa3b127e9b32dd2c1ef93281cbba874a65d7a6 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gsi 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gsi 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.gsi 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=527ccfb79bd925dd3cb8a0005a70d13a 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5tH 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 527ccfb79bd925dd3cb8a0005a70d13a 1 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 527ccfb79bd925dd3cb8a0005a70d13a 1 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=527ccfb79bd925dd3cb8a0005a70d13a 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5tH 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5tH 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.5tH 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=5ed09a10adfdbf0fc2f2aa4609c38707 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8w5 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5ed09a10adfdbf0fc2f2aa4609c38707 1 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5ed09a10adfdbf0fc2f2aa4609c38707 1 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5ed09a10adfdbf0fc2f2aa4609c38707 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8w5 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8w5 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8w5 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:09.367 11:19:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=77a9cef2e5e9918e686cadb2d11f7b6cc6e50ca92ea437f8 00:25:09.367 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SgZ 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 77a9cef2e5e9918e686cadb2d11f7b6cc6e50ca92ea437f8 2 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 77a9cef2e5e9918e686cadb2d11f7b6cc6e50ca92ea437f8 2 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=77a9cef2e5e9918e686cadb2d11f7b6cc6e50ca92ea437f8 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SgZ 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SgZ 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.SgZ 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e9f235dad850185537df0de5264c74cd 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.HAD 00:25:09.626 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e9f235dad850185537df0de5264c74cd 0 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e9f235dad850185537df0de5264c74cd 0 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e9f235dad850185537df0de5264c74cd 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.HAD 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.HAD 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.HAD 00:25:09.627 11:19:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e946b793aad2b69ccb59c3f4df1e945558e577c67fd404af29b9b16337f85023 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yoq 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e946b793aad2b69ccb59c3f4df1e945558e577c67fd404af29b9b16337f85023 3 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e946b793aad2b69ccb59c3f4df1e945558e577c67fd404af29b9b16337f85023 3 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e946b793aad2b69ccb59c3f4df1e945558e577c67fd404af29b9b16337f85023 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:09.627 11:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yoq 00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yoq 00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.yoq 00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4188304 00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4188304 ']' 00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
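The `format_dhchap_key`/`format_key` helpers traced above pipe into an inline `python -` snippet whose body the xtrace does not show. Below is a minimal sketch of what such a formatter could look like, assuming the DH-HMAC-CHAP secret representation is base64 of the ASCII secret followed by its CRC-32 in little-endian, wrapped as `DHHC-1:<2-hex-digit hash id>:<base64>:` — consistent with strings like `DHHC-1:00:YmY1…==:` that appear later in this log. The function name and CRC details are reconstructed here, not taken from `nvmf/common.sh`:

```python
import base64
import zlib

def format_key(prefix: str, key: str, digest: int) -> str:
    """Sketch of a DH-HMAC-CHAP secret formatter (assumed layout).

    Assumed representation:
        <prefix>:<2-digit hex hash id>:<base64(secret || CRC-32(secret), LE)>:
    where the secret is the ASCII hex string read from /dev/urandom via xxd
    in the trace above, and the hash id is 00=null, 01=sha256, 02=sha384,
    03=sha512 (matching the digests map in the trace).
    """
    secret = key.encode("ascii")
    # CRC-32 of the secret bytes, appended little-endian before encoding.
    crc = zlib.crc32(secret).to_bytes(4, "little")
    b64 = base64.b64encode(secret + crc).decode("ascii")
    return f"{prefix}:{digest:02x}:{b64}:"
```

For example, `format_dhchap_key <hexkey> 1` in the trace would correspond to `format_key("DHHC-1", hexkey, 1)` here, yielding a `DHHC-1:01:…:` (sha256) secret suitable for writing into the `spdk.key-sha256.*` temp file with mode 0600.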
00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.627 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Xdd 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.dxK ]] 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dxK 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.p01 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.gsi ]] 00:25:09.886 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gsi 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5tH 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8w5 ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8w5 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.SgZ 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.HAD ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.HAD 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.yoq 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.887 11:19:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:09.887 11:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:12.422 Waiting for block devices as requested 00:25:12.681 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:12.681 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:12.681 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:12.940 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:12.940 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:12.940 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:12.940 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:13.198 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:13.198 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:13.198 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:13.456 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:13.456 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:13.456 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:13.456 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:13.715 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:13.715 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:13.715 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:14.283 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:14.283 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:14.283 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:14.283 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:14.283 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:14.283 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:14.283 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:14.283 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:14.283 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:14.283 No valid GPT data, bailing 00:25:14.283 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:14.542 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:14.543 00:25:14.543 Discovery Log Number of Records 2, Generation counter 2 00:25:14.543 =====Discovery Log Entry 0====== 00:25:14.543 trtype: tcp 00:25:14.543 adrfam: ipv4 00:25:14.543 subtype: current discovery subsystem 00:25:14.543 treq: not specified, sq flow control disable supported 00:25:14.543 portid: 1 00:25:14.543 trsvcid: 4420 00:25:14.543 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:14.543 traddr: 10.0.0.1 00:25:14.543 eflags: none 00:25:14.543 sectype: none 00:25:14.543 =====Discovery Log Entry 1====== 00:25:14.543 trtype: tcp 00:25:14.543 adrfam: ipv4 00:25:14.543 subtype: nvme subsystem 00:25:14.543 treq: not specified, sq flow control disable supported 00:25:14.543 portid: 1 00:25:14.543 trsvcid: 4420 00:25:14.543 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:14.543 traddr: 10.0.0.1 00:25:14.543 eflags: none 00:25:14.543 sectype: none 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.543 11:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.803 nvme0n1 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.803 nvme0n1 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.803 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.063 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.063 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.063 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.063 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.063 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.063 11:19:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.063 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.063 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:15.063 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.063 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.063 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.064 
11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.064 nvme0n1 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.064 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.323 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.323 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:15.324 nvme0n1 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]] 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.324 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.583 nvme0n1 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.583 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:15.584 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.584 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.584 11:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.584 11:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.584 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.843 nvme0n1 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.843 
11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:15.843 
11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.843 11:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.843 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.103 nvme0n1 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.103 11:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.103 11:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.103 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.362 nvme0n1 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.362 11:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.362 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.363 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.363 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.363 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.363 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.363 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.363 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.363 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.621 nvme0n1 00:25:16.621 11:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.621 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.621 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.621 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.621 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.621 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.621 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.621 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.621 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.621 11:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:16.622 11:19:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]] 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.622 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.881 nvme0n1 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.881 11:19:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.881 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.882 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.141 nvme0n1 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.141 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.400 nvme0n1 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
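The overall shape of this section of the log is a nested loop: for each dhgroup and each key id, program the target-side key, restrict the host to one digest/dhgroup pair, attach, verify the controller came up as `nvme0`, and detach. A hedged reconstruction of that control flow, with `rpc_cmd` stubbed so the sketch runs without a live SPDK target (the exact key and dhgroup lists are assumptions inferred from the log):

```shell
#!/usr/bin/env bash
# Hedged sketch of the loop visible in the log: per dhgroup and key id,
# set host options, attach with that key, then detach. rpc_cmd is a
# stub here; in the real test it talks to a running SPDK target.
rpc_cmd() { echo "rpc: $*"; }

keys=(key0 key1 key2 key3 key4)   # assumed: key ids 0..4 as seen above
digest=sha256
dhgroups=(ffdhe3072 ffdhe4096)    # assumed subset of the groups tested

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 --dhchap-key "key${keyid}"
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done
```

Each iteration in the real log additionally checks `bdev_nvme_get_controllers | jq -r '.[].name'` against `nvme0` before detaching.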
00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.400 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.401 
11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.401 11:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.660 nvme0n1 00:25:17.660 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.660 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.660 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.660 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.660 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.660 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.918 11:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:17.918 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.919 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.178 nvme0n1 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.178 11:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:18.178 
11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]] 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.178 11:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.178 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.437 nvme0n1 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.437 11:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.437 
11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.437 11:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.697 nvme0n1 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.697 11:19:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.697 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.266 nvme0n1 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.266 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.267 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.267 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.267 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.267 11:19:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.267 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.527 nvme0n1 00:25:19.527 11:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.527 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.527 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.527 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.527 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.527 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.786 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.786 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.786 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.786 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.786 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.786 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.786 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.787 11:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.787 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.045 nvme0n1 00:25:20.045 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.045 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.045 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.045 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.046 11:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]] 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.046 11:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.046 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.611 nvme0n1 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.611 11:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.611 11:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.611 11:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.869 nvme0n1 00:25:20.869 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.870 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.870 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.870 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.870 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.870 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.870 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.870 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.870 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.870 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.137 11:19:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.137 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.138 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.138 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.138 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.138 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.138 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.138 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.704 nvme0n1 00:25:21.704 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.704 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.704 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.704 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.704 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.704 11:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.704 11:19:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.704 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.705 11:19:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.705 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.705 11:19:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.270 nvme0n1 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.270 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.271 11:19:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.271 11:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.839 nvme0n1 00:25:22.839 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.839 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.839 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.839 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.839 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.839 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]] 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.098 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.666 nvme0n1 00:25:23.666 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.667 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.667 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.667 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.667 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.667 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.667 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.667 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.667 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.667 11:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.667 
11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.667 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.235 nvme0n1 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.235 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.236 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.494 nvme0n1 00:25:24.494 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.494 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.494 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.494 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.494 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.494 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.495 
11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.495 11:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.754 nvme0n1 
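For reference, the per-key cycle the log repeats above (restrict the digest/DH-group, attach with DH-HMAC-CHAP keys, verify the controller, detach) can be sketched as a dry run. The `rpc()` wrapper that only echoes instead of calling SPDK's `scripts/rpc.py` is an assumption for illustration; the RPC names, flags, address, port, and NQNs are taken verbatim from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of one auth-test iteration. rpc() only records and echoes
# each call; to execute for real, replace its body with a real invocation of
# SPDK's scripts/rpc.py (exact path is an assumption, not from the log).
cmds=()
rpc() { cmds+=("$*"); echo "rpc.py $*"; }

digest=sha384 dhgroup=ffdhe2048 keyid=1

# 1. Restrict the initiator to one digest/DH-group combination.
rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Attach, authenticating with the host key and (bidirectional) ctrlr key.
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# 3. Confirm the controller exists, then tear it down before the next key.
rpc bdev_nvme_get_controllers
rpc bdev_nvme_detach_controller nvme0
```

The log then repeats this with each keyid (0 through 4) before moving to the next DH group.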
00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:24.754 11:19:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.754 
11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.754 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.013 nvme0n1 00:25:25.013 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.013 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.013 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.013 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.013 11:19:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.013 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.013 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.013 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.013 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.013 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.013 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]] 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.014 nvme0n1 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.014 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.273 11:19:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.273 nvme0n1 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.273 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.274 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.532 nvme0n1 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.532 11:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.532 
11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:25.532 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.533 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.533 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:25.533 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:25.533 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:25.533 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:25.533 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.533 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.533 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.533 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.533 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.791 nvme0n1
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge:
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR:
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge:
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]]
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR:
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:25.791 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.049 nvme0n1
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==:
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E:
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==:
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]]
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E:
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.049 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.307 nvme0n1
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=:
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:26.307 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=:
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.308 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.565 nvme0n1
00:25:26.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:26.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:26.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/:
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=:
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/:
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]]
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=:
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.565 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.823 nvme0n1
00:25:26.823 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:26.823 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:26.823 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:26.823 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:26.823 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.823 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==:
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==:
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==:
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]]
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==:
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:27.081 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:27.082 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.082 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.340 nvme0n1
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge:
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR:
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge:
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]]
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR:
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.340 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.599 nvme0n1
00:25:27.599 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.599 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:27.599 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.599 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:27.599 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.599 11:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==:
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E:
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==:
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]]
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E:
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:27.599 11:19:55
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.599 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.857 nvme0n1 00:25:27.857 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.857 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.857 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.857 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.857 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.857 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.116 11:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.116 11:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.116 
11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.116 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.375 nvme0n1 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.375 11:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.375 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.376 11:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.634 nvme0n1 
00:25:28.634 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.634 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.634 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.634 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.634 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:28.892 11:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:28.892 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.893 
11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.893 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.151 nvme0n1 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.151 11:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:29.151 11:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:29.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.410 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.410 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:29.410 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.410 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.410 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:29.410 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.410 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.410 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.411 11:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.669 nvme0n1 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.669 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]] 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:29.670 11:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.670 11:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.670 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.236 nvme0n1 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.236 11:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.236 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:30.237 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.493 nvme0n1 00:25:30.493 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.493 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.493 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.493 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.493 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.493 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.493 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.493 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.493 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.493 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.750 11:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.750 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.751 11:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.751 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.317 nvme0n1 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.317 11:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.882 nvme0n1 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.882 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.883 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 nvme0n1 00:25:32.448 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.448 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.448 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.448 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.448 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.448 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.448 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:32.448 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.448 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.707 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]] 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.708 11:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.375 nvme0n1 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.375 11:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x
00:25:34.018 nvme0n1
00:25:34.018 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.018 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:34.018 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/:
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=:
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/:
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=:
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.019 nvme0n1
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==:
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==:
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==:
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]]
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==:
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:34.019 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:34.276 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.276 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.276 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.276 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:34.276 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:34.276 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:34.276 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:34.276 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.277 nvme0n1
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge:
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR:
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge:
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR:
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.277 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.536 nvme0n1
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==:
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E:
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==:
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]]
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E:
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:34.536 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:34.537 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.537 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:34.537 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:34.537 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:34.537 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:34.537 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.537 11:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.796 nvme0n1
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=:
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=:
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.796 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.055 nvme0n1
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/:
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=:
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/:
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]]
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=:
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:35.055 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:35.056 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:35.056 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.056 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.314 nvme0n1
00:25:35.314 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.314 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.314 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.314 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.314 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.314 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.314 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.314 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==:
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==:
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==:
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]]
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==:
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.315 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.574 nvme0n1 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.574 
11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.574 11:20:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.574 11:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.834 nvme0n1 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.834 11:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]] 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.834 11:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.834 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.093 nvme0n1 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:36.093 11:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:36.093 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.094 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.353 nvme0n1 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.353 
11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.353 
11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.353 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.612 nvme0n1 00:25:36.612 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.612 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.612 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.612 11:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.612 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.612 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.612 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.612 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.612 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.612 11:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.612 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.871 nvme0n1 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.871 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.872 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.872 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.872 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.872 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.872 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.872 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.872 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.872 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.872 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.872 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.130 nvme0n1 00:25:37.130 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.130 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.130 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.130 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.130 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.130 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.130 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.130 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.130 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.130 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==: 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]] 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.389 11:20:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.389 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.649 nvme0n1 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.649 11:20:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=: 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.649 11:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.908 nvme0n1 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.908 11:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/: 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]] 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.908 11:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.908 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 nvme0n1 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.477 11:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.477 11:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.736 nvme0n1 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:38.736 
11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.736 11:20:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:38.736 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.304 nvme0n1
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==:
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E:
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==:
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]]
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E:
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.304 11:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.562 nvme0n1
00:25:39.562 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.562 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.562 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.562 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.562 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.562 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=:
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=:
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.821 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.079 nvme0n1
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/:
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=:
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:40.079 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU5ZDRjMDU0ZmVjZDNjNzk1ZDIxYjdlYjZhNjViMmPNAXD/:
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=: ]]
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgyNWQ1NDk0MjE3NTQ0NzNlZjE5NzNlMzA3MmVhYTdlYTRiM2ZiZmQ1YWQyMGVkZmIzODA5NDllNWI4ZDUzZcwMYtY=:
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.080 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.338 11:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.904 nvme0n1
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:40.904 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==:
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==:
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==:
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]]
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==:
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.905 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.472 nvme0n1
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge:
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR:
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge:
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]]
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR:
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:41.472 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:41.473 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:41.473 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:41.473 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:41.473 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:41.473 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.473 11:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.040 nvme0n1
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==:
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E:
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdhOWNlZjJlNWU5OTE4ZTY4NmNhZGIyZDExZjdiNmNjNmU1MGNhOTJlYTQzN2Y4f+CRXg==:
00:25:42.040 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E: ]]
00:25:42.298 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTlmMjM1ZGFkODUwMTg1NTM3ZGYwZGU1MjY0Yzc0Y2SjTt5E:
00:25:42.298 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:25:42.298 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:42.298 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:42.298 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:42.298 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:42.298 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.299 11:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.867 nvme0n1
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=:
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk0NmI3OTNhYWQyYjY5Y2NiNTljM2Y0ZGYxZTk0NTU1OGU1NzdjNjdmZDQwNGFmMjliOWIxNjMzN2Y4NTAyM+O9zv8=:
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:42.867 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.868 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:43.436 nvme0n1
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==:
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==:
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==:
00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z
DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.436 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 request: 00:25:43.436 { 00:25:43.437 "name": "nvme0", 00:25:43.437 "trtype": "tcp", 00:25:43.437 "traddr": "10.0.0.1", 00:25:43.437 "adrfam": "ipv4", 00:25:43.437 "trsvcid": "4420", 00:25:43.437 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:43.437 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:43.697 "prchk_reftag": false, 00:25:43.697 "prchk_guard": false, 00:25:43.697 "hdgst": false, 00:25:43.697 "ddgst": false, 00:25:43.697 "allow_unrecognized_csi": false, 00:25:43.697 "method": "bdev_nvme_attach_controller", 00:25:43.697 "req_id": 1 00:25:43.697 } 00:25:43.697 Got JSON-RPC error 
response 00:25:43.697 response: 00:25:43.697 { 00:25:43.697 "code": -5, 00:25:43.697 "message": "Input/output error" 00:25:43.697 } 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.697 11:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.697 request: 
00:25:43.697 { 00:25:43.697 "name": "nvme0", 00:25:43.697 "trtype": "tcp", 00:25:43.697 "traddr": "10.0.0.1", 00:25:43.697 "adrfam": "ipv4", 00:25:43.697 "trsvcid": "4420", 00:25:43.697 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:43.697 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:43.697 "prchk_reftag": false, 00:25:43.697 "prchk_guard": false, 00:25:43.697 "hdgst": false, 00:25:43.697 "ddgst": false, 00:25:43.697 "dhchap_key": "key2", 00:25:43.697 "allow_unrecognized_csi": false, 00:25:43.697 "method": "bdev_nvme_attach_controller", 00:25:43.697 "req_id": 1 00:25:43.697 } 00:25:43.697 Got JSON-RPC error response 00:25:43.697 response: 00:25:43.697 { 00:25:43.697 "code": -5, 00:25:43.697 "message": "Input/output error" 00:25:43.697 } 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:43.697 11:20:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.697 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.697 request: 00:25:43.698 { 00:25:43.698 "name": "nvme0", 00:25:43.698 "trtype": "tcp", 00:25:43.698 "traddr": "10.0.0.1", 00:25:43.698 "adrfam": "ipv4", 00:25:43.698 "trsvcid": "4420", 00:25:43.698 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:43.698 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:43.698 "prchk_reftag": false, 00:25:43.698 "prchk_guard": false, 00:25:43.698 "hdgst": false, 00:25:43.698 "ddgst": false, 00:25:43.698 "dhchap_key": "key1", 00:25:43.698 "dhchap_ctrlr_key": "ckey2", 00:25:43.698 "allow_unrecognized_csi": false, 00:25:43.698 "method": "bdev_nvme_attach_controller", 00:25:43.698 "req_id": 1 00:25:43.698 } 00:25:43.698 Got JSON-RPC error response 00:25:43.698 response: 00:25:43.698 { 00:25:43.698 "code": -5, 00:25:43.698 "message": "Input/output error" 00:25:43.698 } 00:25:43.698 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:43.698 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:43.698 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:43.698 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:43.698 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:43.698 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:43.698 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.698 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.698 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.698 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.957 nvme0n1 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:43.957 11:20:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:43.957 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.216 request: 00:25:44.216 { 00:25:44.216 "name": "nvme0", 00:25:44.216 "dhchap_key": "key1", 00:25:44.216 "dhchap_ctrlr_key": "ckey2", 00:25:44.216 "method": "bdev_nvme_set_keys", 00:25:44.216 "req_id": 1 00:25:44.216 } 00:25:44.216 Got JSON-RPC error response 00:25:44.216 
response: 00:25:44.216 { 00:25:44.216 "code": -13, 00:25:44.216 "message": "Permission denied" 00:25:44.216 } 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:44.216 11:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:45.152 11:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.152 11:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:45.152 11:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.152 11:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.152 11:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.152 11:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:45.152 11:20:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1YTEyYzY5NTUxMjA3N2M2OWFiNTYyZDFiYjRhM2E3OGYxZGJkOGMzNmM1YmNjWljl4g==: 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: ]] 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTk2MGRmOWYwNGZhM2IxMjdlOWIzMmRkMmMxZWY5MzI4MWNiYmE4NzRhNjVkN2E2w75NCQ==: 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.608 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.609 nvme0n1 00:25:46.609 11:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTI3Y2NmYjc5YmQ5MjVkZDNjYjhhMDAwNWE3MGQxM2GPOAge: 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: ]] 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWVkMDlhMTBhZGZkYmYwZmMyZjJhYTQ2MDljMzg3MDdzGunR: 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:46.609 11:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.609 request: 00:25:46.609 { 00:25:46.609 "name": "nvme0", 00:25:46.609 "dhchap_key": "key2", 00:25:46.609 "dhchap_ctrlr_key": "ckey1", 00:25:46.609 "method": "bdev_nvme_set_keys", 00:25:46.609 "req_id": 1 00:25:46.609 } 00:25:46.609 Got JSON-RPC error response 00:25:46.609 response: 00:25:46.609 { 00:25:46.609 "code": -13, 00:25:46.609 "message": "Permission denied" 00:25:46.609 } 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:46.609 11:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:46.609 11:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.547 11:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.547 rmmod nvme_tcp 
00:25:47.547 rmmod nvme_fabrics 00:25:47.547 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.547 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:47.547 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:47.547 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 4188304 ']' 00:25:47.547 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 4188304 00:25:47.547 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 4188304 ']' 00:25:47.547 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 4188304 00:25:47.547 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:47.547 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.547 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4188304 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4188304' 00:25:47.806 killing process with pid 4188304 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 4188304 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 4188304 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.806 11:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:50.344 11:20:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:50.344 11:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:52.881 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:52.881 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:53.819 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:53.819 11:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Xdd /tmp/spdk.key-null.p01 /tmp/spdk.key-sha256.5tH /tmp/spdk.key-sha384.SgZ 
/tmp/spdk.key-sha512.yoq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:53.819 11:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:57.107 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:57.107 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:57.107 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:57.107 00:25:57.107 real 0m54.057s 00:25:57.107 user 0m48.857s 00:25:57.107 sys 0m12.545s 00:25:57.107 11:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:57.107 11:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.107 ************************************ 00:25:57.107 END TEST nvmf_auth_host 00:25:57.107 ************************************ 00:25:57.107 11:20:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:25:57.107 11:20:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.108 ************************************ 00:25:57.108 START TEST nvmf_digest 00:25:57.108 ************************************ 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:57.108 * Looking for test storage... 00:25:57.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.108 11:20:24 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:57.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.108 --rc genhtml_branch_coverage=1 00:25:57.108 --rc genhtml_function_coverage=1 00:25:57.108 --rc genhtml_legend=1 00:25:57.108 --rc geninfo_all_blocks=1 00:25:57.108 --rc geninfo_unexecuted_blocks=1 00:25:57.108 00:25:57.108 ' 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:57.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.108 --rc genhtml_branch_coverage=1 00:25:57.108 --rc genhtml_function_coverage=1 00:25:57.108 --rc genhtml_legend=1 00:25:57.108 --rc geninfo_all_blocks=1 00:25:57.108 --rc geninfo_unexecuted_blocks=1 00:25:57.108 00:25:57.108 ' 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:57.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.108 --rc genhtml_branch_coverage=1 00:25:57.108 --rc genhtml_function_coverage=1 00:25:57.108 --rc genhtml_legend=1 00:25:57.108 --rc geninfo_all_blocks=1 00:25:57.108 --rc geninfo_unexecuted_blocks=1 00:25:57.108 00:25:57.108 ' 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:57.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.108 --rc genhtml_branch_coverage=1 00:25:57.108 --rc genhtml_function_coverage=1 00:25:57.108 --rc genhtml_legend=1 00:25:57.108 --rc geninfo_all_blocks=1 00:25:57.108 --rc geninfo_unexecuted_blocks=1 00:25:57.108 00:25:57.108 ' 00:25:57.108 11:20:24 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.108 
11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.108 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:57.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:57.109 11:20:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:57.109 11:20:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:03.676 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.677 11:20:30 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:03.677 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:03.677 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:03.677 Found net devices under 0000:86:00.0: cvl_0_0 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:03.677 Found net devices under 0000:86:00.1: cvl_0_1 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:03.677 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:03.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:26:03.678 00:26:03.678 --- 10.0.0.2 ping statistics --- 00:26:03.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.678 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:03.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:26:03.678 00:26:03.678 --- 10.0.0.1 ping statistics --- 00:26:03.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.678 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:03.678 ************************************ 00:26:03.678 START TEST nvmf_digest_clean 00:26:03.678 ************************************ 00:26:03.678 
11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=8618 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 8618 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 8618 ']' 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.678 11:20:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:03.678 [2024-11-20 11:20:30.376991] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:03.678 [2024-11-20 11:20:30.377032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.678 [2024-11-20 11:20:30.442872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.678 [2024-11-20 11:20:30.484636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.678 [2024-11-20 11:20:30.484672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.678 [2024-11-20 11:20:30.484680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.678 [2024-11-20 11:20:30.484687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.678 [2024-11-20 11:20:30.484692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:03.678 [2024-11-20 11:20:30.485263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:03.678 null0 00:26:03.678 [2024-11-20 11:20:30.657764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.678 [2024-11-20 11:20:30.681978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=8637 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 8637 /var/tmp/bperf.sock 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 8637 ']' 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:03.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:03.678 [2024-11-20 11:20:30.736685] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:03.678 [2024-11-20 11:20:30.736729] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid8637 ] 00:26:03.678 [2024-11-20 11:20:30.796929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.678 [2024-11-20 11:20:30.843012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:03.678 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:03.679 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:03.679 11:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:03.679 11:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.679 11:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 
4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.244 nvme0n1 00:26:04.244 11:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:04.244 11:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:04.244 Running I/O for 2 seconds... 00:26:06.112 24992.00 IOPS, 97.62 MiB/s [2024-11-20T10:20:33.608Z] 25004.00 IOPS, 97.67 MiB/s 00:26:06.112 Latency(us) 00:26:06.112 [2024-11-20T10:20:33.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.112 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:06.112 nvme0n1 : 2.00 25022.75 97.75 0.00 0.00 5110.24 2535.96 11511.54 00:26:06.112 [2024-11-20T10:20:33.608Z] =================================================================================================================== 00:26:06.112 [2024-11-20T10:20:33.608Z] Total : 25022.75 97.75 0.00 0.00 5110.24 2535.96 11511.54 00:26:06.112 { 00:26:06.112 "results": [ 00:26:06.112 { 00:26:06.112 "job": "nvme0n1", 00:26:06.112 "core_mask": "0x2", 00:26:06.112 "workload": "randread", 00:26:06.112 "status": "finished", 00:26:06.112 "queue_depth": 128, 00:26:06.112 "io_size": 4096, 00:26:06.112 "runtime": 2.003617, 00:26:06.112 "iops": 25022.74636320215, 00:26:06.112 "mibps": 97.7451029812584, 00:26:06.112 "io_failed": 0, 00:26:06.112 "io_timeout": 0, 00:26:06.112 "avg_latency_us": 5110.243437744985, 00:26:06.112 "min_latency_us": 2535.958260869565, 00:26:06.112 "max_latency_us": 11511.540869565217 00:26:06.112 } 00:26:06.112 ], 00:26:06.112 "core_count": 1 00:26:06.112 } 00:26:06.112 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:06.112 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:26:06.112 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:06.112 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:06.112 | select(.opcode=="crc32c") 00:26:06.112 | "\(.module_name) \(.executed)"' 00:26:06.112 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 8637 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 8637 ']' 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 8637 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 8637 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 8637' 00:26:06.370 killing process with pid 8637 00:26:06.370 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 8637 00:26:06.370 Received shutdown signal, test time was about 2.000000 seconds 00:26:06.370 00:26:06.371 Latency(us) 00:26:06.371 [2024-11-20T10:20:33.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.371 [2024-11-20T10:20:33.867Z] =================================================================================================================== 00:26:06.371 [2024-11-20T10:20:33.867Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.371 11:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 8637 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=9115 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 9115 /var/tmp/bperf.sock 00:26:06.629 
11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 9115 ']' 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:06.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.629 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:06.629 [2024-11-20 11:20:34.055103] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:06.629 [2024-11-20 11:20:34.055152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9115 ] 00:26:06.629 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:06.629 Zero copy mechanism will not be used. 
00:26:06.888 [2024-11-20 11:20:34.133154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.888 [2024-11-20 11:20:34.176893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.888 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.888 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:06.888 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:06.888 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:06.888 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:07.146 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.146 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.404 nvme0n1 00:26:07.404 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:07.404 11:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:07.404 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:07.404 Zero copy mechanism will not be used. 00:26:07.404 Running I/O for 2 seconds... 
00:26:09.711 5432.00 IOPS, 679.00 MiB/s [2024-11-20T10:20:37.207Z] 5709.00 IOPS, 713.62 MiB/s 00:26:09.711 Latency(us) 00:26:09.711 [2024-11-20T10:20:37.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.711 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:09.711 nvme0n1 : 2.00 5707.73 713.47 0.00 0.00 2800.57 662.48 10827.69 00:26:09.711 [2024-11-20T10:20:37.207Z] =================================================================================================================== 00:26:09.711 [2024-11-20T10:20:37.207Z] Total : 5707.73 713.47 0.00 0.00 2800.57 662.48 10827.69 00:26:09.711 { 00:26:09.711 "results": [ 00:26:09.711 { 00:26:09.711 "job": "nvme0n1", 00:26:09.711 "core_mask": "0x2", 00:26:09.711 "workload": "randread", 00:26:09.711 "status": "finished", 00:26:09.711 "queue_depth": 16, 00:26:09.711 "io_size": 131072, 00:26:09.711 "runtime": 2.003247, 00:26:09.711 "iops": 5707.733494671401, 00:26:09.711 "mibps": 713.4666868339251, 00:26:09.711 "io_failed": 0, 00:26:09.711 "io_timeout": 0, 00:26:09.711 "avg_latency_us": 2800.5656673080284, 00:26:09.711 "min_latency_us": 662.4834782608696, 00:26:09.711 "max_latency_us": 10827.686956521738 00:26:09.711 } 00:26:09.711 ], 00:26:09.711 "core_count": 1 00:26:09.711 } 00:26:09.711 11:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:09.711 11:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:09.711 11:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:09.711 11:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:09.711 | select(.opcode=="crc32c") 00:26:09.711 | "\(.module_name) \(.executed)"' 00:26:09.711 11:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 9115 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 9115 ']' 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 9115 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 9115 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 9115' 00:26:09.711 killing process with pid 9115 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 9115 00:26:09.711 Received shutdown signal, test time was about 2.000000 seconds 00:26:09.711 00:26:09.711 
Latency(us) 00:26:09.711 [2024-11-20T10:20:37.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.711 [2024-11-20T10:20:37.207Z] =================================================================================================================== 00:26:09.711 [2024-11-20T10:20:37.207Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.711 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 9115 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=9800 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 9800 /var/tmp/bperf.sock 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 9800 ']' 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.969 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.970 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.970 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.970 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:09.970 [2024-11-20 11:20:37.385690] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:09.970 [2024-11-20 11:20:37.385737] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9800 ] 00:26:10.228 [2024-11-20 11:20:37.464525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.228 [2024-11-20 11:20:37.509232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.228 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:10.228 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:10.228 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:10.228 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:10.228 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:10.486 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.486 11:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.743 nvme0n1 00:26:10.743 11:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:10.743 11:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.743 Running I/O for 2 seconds... 
00:26:13.047 27759.00 IOPS, 108.43 MiB/s [2024-11-20T10:20:40.543Z] 27779.50 IOPS, 108.51 MiB/s 00:26:13.047 Latency(us) 00:26:13.047 [2024-11-20T10:20:40.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.047 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:13.047 nvme0n1 : 2.00 27799.85 108.59 0.00 0.00 4598.97 2265.27 9289.02 00:26:13.047 [2024-11-20T10:20:40.543Z] =================================================================================================================== 00:26:13.047 [2024-11-20T10:20:40.543Z] Total : 27799.85 108.59 0.00 0.00 4598.97 2265.27 9289.02 00:26:13.047 { 00:26:13.047 "results": [ 00:26:13.047 { 00:26:13.047 "job": "nvme0n1", 00:26:13.047 "core_mask": "0x2", 00:26:13.047 "workload": "randwrite", 00:26:13.047 "status": "finished", 00:26:13.047 "queue_depth": 128, 00:26:13.047 "io_size": 4096, 00:26:13.047 "runtime": 2.00314, 00:26:13.047 "iops": 27799.854228860688, 00:26:13.047 "mibps": 108.59318058148706, 00:26:13.047 "io_failed": 0, 00:26:13.047 "io_timeout": 0, 00:26:13.047 "avg_latency_us": 4598.967812173788, 00:26:13.047 "min_latency_us": 2265.2660869565216, 00:26:13.047 "max_latency_us": 9289.015652173914 00:26:13.047 } 00:26:13.047 ], 00:26:13.047 "core_count": 1 00:26:13.047 } 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:13.047 | select(.opcode=="crc32c") 00:26:13.047 | "\(.module_name) \(.executed)"' 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 9800 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 9800 ']' 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 9800 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 9800 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 9800' 00:26:13.047 killing process with pid 9800 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 9800 00:26:13.047 Received shutdown signal, test time was about 2.000000 seconds 00:26:13.047 00:26:13.047 
Latency(us) 00:26:13.047 [2024-11-20T10:20:40.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.047 [2024-11-20T10:20:40.543Z] =================================================================================================================== 00:26:13.047 [2024-11-20T10:20:40.543Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:13.047 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 9800 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=10277 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 10277 /var/tmp/bperf.sock 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 10277 ']' 00:26:13.306 11:20:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:13.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.306 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.306 [2024-11-20 11:20:40.691712] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:13.306 [2024-11-20 11:20:40.691765] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid10277 ] 00:26:13.306 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:13.306 Zero copy mechanism will not be used. 
00:26:13.306 [2024-11-20 11:20:40.768686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.565 [2024-11-20 11:20:40.811598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.565 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.565 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:13.565 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:13.565 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:13.565 11:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:13.823 11:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:13.823 11:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.081 nvme0n1 00:26:14.081 11:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:14.081 11:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.081 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:14.081 Zero copy mechanism will not be used. 00:26:14.081 Running I/O for 2 seconds... 
00:26:16.393 6378.00 IOPS, 797.25 MiB/s [2024-11-20T10:20:43.889Z] 6456.00 IOPS, 807.00 MiB/s 00:26:16.393 Latency(us) 00:26:16.393 [2024-11-20T10:20:43.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.393 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:16.393 nvme0n1 : 2.00 6452.99 806.62 0.00 0.00 2475.14 1866.35 5271.37 00:26:16.393 [2024-11-20T10:20:43.889Z] =================================================================================================================== 00:26:16.393 [2024-11-20T10:20:43.889Z] Total : 6452.99 806.62 0.00 0.00 2475.14 1866.35 5271.37 00:26:16.393 { 00:26:16.393 "results": [ 00:26:16.393 { 00:26:16.393 "job": "nvme0n1", 00:26:16.393 "core_mask": "0x2", 00:26:16.393 "workload": "randwrite", 00:26:16.393 "status": "finished", 00:26:16.393 "queue_depth": 16, 00:26:16.393 "io_size": 131072, 00:26:16.393 "runtime": 2.003411, 00:26:16.393 "iops": 6452.994418020066, 00:26:16.393 "mibps": 806.6243022525083, 00:26:16.393 "io_failed": 0, 00:26:16.393 "io_timeout": 0, 00:26:16.393 "avg_latency_us": 2475.1381489453297, 00:26:16.393 "min_latency_us": 1866.351304347826, 00:26:16.393 "max_latency_us": 5271.373913043478 00:26:16.393 } 00:26:16.393 ], 00:26:16.393 "core_count": 1 00:26:16.393 } 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:16.393 | select(.opcode=="crc32c") 00:26:16.393 | "\(.module_name) \(.executed)"' 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 10277 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 10277 ']' 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 10277 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 10277 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 10277' 00:26:16.393 killing process with pid 10277 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 10277 00:26:16.393 Received shutdown signal, test time was about 2.000000 seconds 00:26:16.393 
00:26:16.393 Latency(us) 00:26:16.393 [2024-11-20T10:20:43.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.393 [2024-11-20T10:20:43.889Z] =================================================================================================================== 00:26:16.393 [2024-11-20T10:20:43.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.393 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 10277 00:26:16.652 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 8618 00:26:16.652 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 8618 ']' 00:26:16.652 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 8618 00:26:16.652 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:16.652 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:16.652 11:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 8618 00:26:16.652 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:16.652 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:16.652 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 8618' 00:26:16.652 killing process with pid 8618 00:26:16.652 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 8618 00:26:16.652 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 8618 00:26:16.912 00:26:16.912 real 0m13.847s 00:26:16.912 user 
0m26.645s 00:26:16.912 sys 0m4.506s 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:16.912 ************************************ 00:26:16.912 END TEST nvmf_digest_clean 00:26:16.912 ************************************ 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:16.912 ************************************ 00:26:16.912 START TEST nvmf_digest_error 00:26:16.912 ************************************ 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=10882 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 10882 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 10882 ']' 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.912 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.912 [2024-11-20 11:20:44.303009] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:16.912 [2024-11-20 11:20:44.303055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.912 [2024-11-20 11:20:44.383533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.171 [2024-11-20 11:20:44.425334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.171 [2024-11-20 11:20:44.425370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:17.171 [2024-11-20 11:20:44.425378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.171 [2024-11-20 11:20:44.425384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.171 [2024-11-20 11:20:44.425389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:17.171 [2024-11-20 11:20:44.425935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.171 [2024-11-20 11:20:44.486363] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.171 11:20:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.171 null0 00:26:17.171 [2024-11-20 11:20:44.581865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.171 [2024-11-20 11:20:44.606075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=11012 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 11012 /var/tmp/bperf.sock 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 11012 ']' 
00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.171 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.171 [2024-11-20 11:20:44.656695] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:17.171 [2024-11-20 11:20:44.656734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11012 ] 00:26:17.431 [2024-11-20 11:20:44.729842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.431 [2024-11-20 11:20:44.770590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.431 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.431 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:17.431 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.431 11:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.690 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:17.690 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.690 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.690 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.690 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.690 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.947 nvme0n1 00:26:17.947 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:17.947 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.947 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.947 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.947 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:17.947 11:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:17.947 Running I/O for 2 seconds... 00:26:18.204 [2024-11-20 11:20:45.448420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.204 [2024-11-20 11:20:45.448453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.204 [2024-11-20 11:20:45.448465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.204 [2024-11-20 11:20:45.458608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.204 [2024-11-20 11:20:45.458635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.204 [2024-11-20 11:20:45.458644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.204 [2024-11-20 11:20:45.470275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.204 [2024-11-20 11:20:45.470301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.204 [2024-11-20 11:20:45.470310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.204 [2024-11-20 11:20:45.481591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.204 [2024-11-20 11:20:45.481615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19820 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.204 [2024-11-20 11:20:45.481624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.204 [2024-11-20 11:20:45.490298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.204 [2024-11-20 11:20:45.490320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.204 [2024-11-20 11:20:45.490329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.204 [2024-11-20 11:20:45.501627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.204 [2024-11-20 11:20:45.501650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.204 [2024-11-20 11:20:45.501663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.204 [2024-11-20 11:20:45.518000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.204 [2024-11-20 11:20:45.518023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.204 [2024-11-20 11:20:45.518031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.204 [2024-11-20 11:20:45.530989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.204 [2024-11-20 11:20:45.531011] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.204 [2024-11-20 11:20:45.531019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.204 [2024-11-20 11:20:45.543848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.204 [2024-11-20 11:20:45.543869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.204 [2024-11-20 11:20:45.543878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.204 [2024-11-20 11:20:45.552279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.552299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.552307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.564043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.564065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.564073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.574013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 
11:20:45.574034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.574042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.584478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.584500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.584510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.593366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.593394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.593403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.603665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.603690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.603699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.614567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.614589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.614597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.624033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.624055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.624063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.632878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.632898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.632907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.642331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.642351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.642359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.652441] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.652462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.652470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.662687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.662708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.662717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.671309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.671329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.671338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.680648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.680669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.680677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:18.205 [2024-11-20 11:20:45.691357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.205 [2024-11-20 11:20:45.691377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.205 [2024-11-20 11:20:45.691386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.700133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.700158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.700168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.709667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.709690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.709699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.719803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.719825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.719834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.730656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.730677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.730686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.741788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.741810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.741819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.752063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.752085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.752093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.760433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.760454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.760463] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.770251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.770272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.770285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.780167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.780192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.780200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.788587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.788608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.788616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.463 [2024-11-20 11:20:45.799319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.463 [2024-11-20 11:20:45.799340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17085 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:18.463 [2024-11-20 11:20:45.799348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.807776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.807797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.807805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.816985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.817006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.817014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.826623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.826643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.826651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.836314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.836334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:112 nsid:1 lba:14567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.836342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.846180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.846200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.846209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.858167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.858188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.858196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.866325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.866347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.866356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.875811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.875831] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.875840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.885803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.885824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.885833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.895872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.895893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.895901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.905131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.905151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.905159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.914387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.914408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.914416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.923765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.923785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.923793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.932225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.932245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.932257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.942086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.942107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.942115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.464 [2024-11-20 11:20:45.951822] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.464 [2024-11-20 11:20:45.951843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.464 [2024-11-20 11:20:45.951851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.723 [2024-11-20 11:20:45.960932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.723 [2024-11-20 11:20:45.960965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.723 [2024-11-20 11:20:45.960976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.723 [2024-11-20 11:20:45.970388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.723 [2024-11-20 11:20:45.970409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.723 [2024-11-20 11:20:45.970417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.723 [2024-11-20 11:20:45.981329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.723 [2024-11-20 11:20:45.981349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.723 [2024-11-20 11:20:45.981357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:18.723 [2024-11-20 11:20:45.989213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.723 [2024-11-20 11:20:45.989233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.723 [2024-11-20 11:20:45.989242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.723 [2024-11-20 11:20:45.999320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.723 [2024-11-20 11:20:45.999340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.723 [2024-11-20 11:20:45.999349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.723 [2024-11-20 11:20:46.008554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.723 [2024-11-20 11:20:46.008574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.723 [2024-11-20 11:20:46.008582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.723 [2024-11-20 11:20:46.018508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.723 [2024-11-20 11:20:46.018533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.723 [2024-11-20 11:20:46.018541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.723 [2024-11-20 11:20:46.028393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.723 [2024-11-20 11:20:46.028414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.723 [2024-11-20 11:20:46.028422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.723 [2024-11-20 11:20:46.037723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.723 [2024-11-20 11:20:46.037744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.723 [2024-11-20 11:20:46.037752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.723 [2024-11-20 11:20:46.046270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.723 [2024-11-20 11:20:46.046291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.046299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.055761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.055781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.055790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.066270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.066291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.066300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.075667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.075687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.075695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.084875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.084896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.084904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.094136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.094156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.724 [2024-11-20 11:20:46.094165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.102833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.102853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.102861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.113143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.113163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.113172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.124670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.124690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.124698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.132983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.133003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:22139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.133011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.144652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.144672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.144681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.154799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.154819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.154827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.163063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.163084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.163092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.173212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.173232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.173239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.182860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.182881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.182893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.192348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.192368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.192376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.201755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.724 [2024-11-20 11:20:46.201776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.201784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.724 [2024-11-20 11:20:46.211083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 
00:26:18.724 [2024-11-20 11:20:46.211103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.724 [2024-11-20 11:20:46.211111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.983 [2024-11-20 11:20:46.220378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.983 [2024-11-20 11:20:46.220402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.983 [2024-11-20 11:20:46.220412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.983 [2024-11-20 11:20:46.229918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.983 [2024-11-20 11:20:46.229941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.983 [2024-11-20 11:20:46.229956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.983 [2024-11-20 11:20:46.239026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.983 [2024-11-20 11:20:46.239049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.983 [2024-11-20 11:20:46.239057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.983 [2024-11-20 11:20:46.249612] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.249633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.249641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.258933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.258958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.258966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.269102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.269127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.269135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.277378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.277400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.277408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.287121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.287143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.287151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.297849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.297870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.297878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.310455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.310476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.310484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.321684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.321705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.321714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.330367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.330387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.330395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.342723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.342744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.342752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.353793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.353813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.353822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.362697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.362719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.362727] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.375282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.375303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.375312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.387193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.387213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.387222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.395542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.395562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.395571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.405686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.405706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18569 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.405714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.417410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.417430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.417438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.984 [2024-11-20 11:20:46.426204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.984 [2024-11-20 11:20:46.426224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.984 [2024-11-20 11:20:46.426233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.985 25300.00 IOPS, 98.83 MiB/s [2024-11-20T10:20:46.481Z] [2024-11-20 11:20:46.437489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.985 [2024-11-20 11:20:46.437510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.985 [2024-11-20 11:20:46.437518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.985 [2024-11-20 11:20:46.448706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.985 [2024-11-20 11:20:46.448730] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.985 [2024-11-20 11:20:46.448739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.985 [2024-11-20 11:20:46.463032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.985 [2024-11-20 11:20:46.463053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.985 [2024-11-20 11:20:46.463062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.985 [2024-11-20 11:20:46.474505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:18.985 [2024-11-20 11:20:46.474528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.985 [2024-11-20 11:20:46.474537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.487533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.487556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.244 [2024-11-20 11:20:46.487565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.496809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.496830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.244 [2024-11-20 11:20:46.496839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.507885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.507906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.244 [2024-11-20 11:20:46.507914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.519102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.519123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.244 [2024-11-20 11:20:46.519132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.527115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.527136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.244 [2024-11-20 11:20:46.527144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.538783] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.538804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.244 [2024-11-20 11:20:46.538812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.550924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.550945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.244 [2024-11-20 11:20:46.550958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.563846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.563868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.244 [2024-11-20 11:20:46.563876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.574482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.574502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.244 [2024-11-20 11:20:46.574510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.583472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.583493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.244 [2024-11-20 11:20:46.583501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.244 [2024-11-20 11:20:46.593802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.244 [2024-11-20 11:20:46.593822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.593830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.603564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.603583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.603592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.615270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.615291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.615299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.626362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.626382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.626391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.635099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.635119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.635131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.647833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.647854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.647863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.659651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.659672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.659680] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.669802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.669823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.669831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.678893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.678912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.678921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.688053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.688074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.688083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.698372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.698392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5143 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.698401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.709733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.709753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.709761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.717724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.717745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.717754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.245 [2024-11-20 11:20:46.728622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.245 [2024-11-20 11:20:46.728646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.245 [2024-11-20 11:20:46.728655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.739207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.739232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:6314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.739242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.750278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.750301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.750310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.759413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.759434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.759443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.767617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.767638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.767647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.778032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.778052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.778060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.789096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.789117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.789125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.799679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.799699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.799708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.808149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.808169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.808177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.817481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 
00:26:19.504 [2024-11-20 11:20:46.817500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.817509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.828642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.828662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.828672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.841070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.841090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.841098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.852266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.852286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.852295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.861738] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.861759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.861767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.872807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.872826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.872835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.883358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.883379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.883387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.896269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.896291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.896300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.905911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.905932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.905944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.504 [2024-11-20 11:20:46.914984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.504 [2024-11-20 11:20:46.915005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.504 [2024-11-20 11:20:46.915014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.505 [2024-11-20 11:20:46.925644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.505 [2024-11-20 11:20:46.925667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.505 [2024-11-20 11:20:46.925675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.505 [2024-11-20 11:20:46.935172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.505 [2024-11-20 11:20:46.935194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.505 [2024-11-20 11:20:46.935204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.505 [2024-11-20 11:20:46.944207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.505 [2024-11-20 11:20:46.944228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.505 [2024-11-20 11:20:46.944236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.505 [2024-11-20 11:20:46.956248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.505 [2024-11-20 11:20:46.956270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.505 [2024-11-20 11:20:46.956279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.505 [2024-11-20 11:20:46.965851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.505 [2024-11-20 11:20:46.965872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.505 [2024-11-20 11:20:46.965880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.505 [2024-11-20 11:20:46.977055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.505 [2024-11-20 11:20:46.977080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.505 [2024-11-20 11:20:46.977089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.505 [2024-11-20 11:20:46.986595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.505 [2024-11-20 11:20:46.986615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.505 [2024-11-20 11:20:46.986623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.505 [2024-11-20 11:20:46.996458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.505 [2024-11-20 11:20:46.996486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.505 [2024-11-20 11:20:46.996495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.763 [2024-11-20 11:20:47.008734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.763 [2024-11-20 11:20:47.008758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.763 [2024-11-20 11:20:47.008768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.763 [2024-11-20 11:20:47.019605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.763 [2024-11-20 11:20:47.019626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.763 [2024-11-20 11:20:47.019635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.763 [2024-11-20 11:20:47.028040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.763 [2024-11-20 11:20:47.028061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.763 [2024-11-20 11:20:47.028070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.763 [2024-11-20 11:20:47.039131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.763 [2024-11-20 11:20:47.039152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.039160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.048860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.048880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.048888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.058816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.058836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:9795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.058844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.069455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.069476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.069485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.079151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.079172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.079181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.088092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.088113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.088121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.099527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.099548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.099556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.111704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.111724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.111734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.124365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.124386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.124394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.135845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.135866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.135874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.144498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.144519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.144528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.155360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.155381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.155390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.165521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.165543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.165552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.174118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.174139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.174153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.185591] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.185612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.185621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.198423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.198444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.198452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.210288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.210309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.210318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.220960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.220981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.220989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.232660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.232680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.232688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.243255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.243275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.243283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.764 [2024-11-20 11:20:47.252328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:19.764 [2024-11-20 11:20:47.252348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.764 [2024-11-20 11:20:47.252356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.022 [2024-11-20 11:20:47.264922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.022 [2024-11-20 11:20:47.264952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.022 [2024-11-20 11:20:47.264963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.022 [2024-11-20 11:20:47.276524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.022 [2024-11-20 11:20:47.276545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.022 [2024-11-20 11:20:47.276554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.022 [2024-11-20 11:20:47.285900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.285921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.285929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.294763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.294783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.294792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.305136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.305158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 
11:20:47.305166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.313731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.313753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.313764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.325107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.325128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.325137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.333413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.333434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.333442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.342897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.342918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4446 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.342926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.354455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.354476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.354488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.367053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.367074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.367082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.377666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.377687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.377695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.385721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.385741] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.385749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.397332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.397352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.397360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.409868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.409889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.409897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-11-20 11:20:47.422033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x922370) 00:26:20.023 [2024-11-20 11:20:47.422054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-11-20 11:20:47.422063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 24732.50 IOPS, 96.61 MiB/s [2024-11-20T10:20:47.519Z] [2024-11-20 11:20:47.434198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x922370)
00:26:20.023 [2024-11-20 11:20:47.434216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.023 [2024-11-20 11:20:47.434224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.023
00:26:20.023 Latency(us)
00:26:20.023 [2024-11-20T10:20:47.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.023 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:20.023 nvme0n1 : 2.04 24261.47 94.77 0.00 0.00 5169.30 2664.18 47641.82
00:26:20.023 [2024-11-20T10:20:47.519Z] ===================================================================================================================
00:26:20.023 [2024-11-20T10:20:47.519Z] Total : 24261.47 94.77 0.00 0.00 5169.30 2664.18 47641.82
00:26:20.023 {
00:26:20.023   "results": [
00:26:20.023     {
00:26:20.023       "job": "nvme0n1",
00:26:20.023       "core_mask": "0x2",
00:26:20.023       "workload": "randread",
00:26:20.023       "status": "finished",
00:26:20.023       "queue_depth": 128,
00:26:20.023       "io_size": 4096,
00:26:20.023       "runtime": 2.044105,
00:26:20.023       "iops": 24261.473847967693,
00:26:20.023       "mibps": 94.7713822186238,
00:26:20.023       "io_failed": 0,
00:26:20.023       "io_timeout": 0,
00:26:20.023       "avg_latency_us": 5169.301463337655,
00:26:20.023       "min_latency_us": 2664.1808695652176,
00:26:20.023       "max_latency_us": 47641.82260869565
00:26:20.023     }
00:26:20.023   ],
00:26:20.023   "core_count": 1
00:26:20.023 }
00:26:20.023 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:20.023 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:20.023 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:20.023 | 
.driver_specific 00:26:20.023 | .nvme_error 00:26:20.023 | .status_code 00:26:20.023 | .command_transient_transport_error' 00:26:20.023 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 )) 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 11012 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 11012 ']' 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 11012 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 11012 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 11012' 00:26:20.282 killing process with pid 11012 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 11012 00:26:20.282 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.282 00:26:20.282 Latency(us) 00:26:20.282 [2024-11-20T10:20:47.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:20.282 [2024-11-20T10:20:47.778Z] =================================================================================================================== 00:26:20.282 [2024-11-20T10:20:47.778Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.282 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 11012 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=11488 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 11488 /var/tmp/bperf.sock 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 11488 ']' 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:26:20.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.563 11:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.563 [2024-11-20 11:20:47.948625] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:20.563 [2024-11-20 11:20:47.948673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11488 ] 00:26:20.563 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:20.563 Zero copy mechanism will not be used. 00:26:20.563 [2024-11-20 11:20:48.024559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.877 [2024-11-20 11:20:48.069401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.877 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.877 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:20.877 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.877 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.877 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:20.877 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.877 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.877 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.877 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.877 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.460 nvme0n1 00:26:21.460 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:21.460 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.460 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.460 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.460 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:21.460 11:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.460 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:21.460 Zero copy mechanism will not be used. 00:26:21.461 Running I/O for 2 seconds... 
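The "data digest error" records that follow are the point of this test: the `accel_error_inject_error -o crc32c -t corrupt` RPC above makes the accel framework return a corrupted CRC32C result, so the host-side digest check in `nvme_tcp_accel_seq_recv_compute_crc32_done` rejects each data PDU and the command completes with a transient transport error. As a minimal illustrative sketch (not SPDK's implementation, which uses table-driven or hardware-accelerated CRC32C), the checksum being verified is CRC32C (Castagnoli); the helper name `data_digest_ok` below is hypothetical:

```python
# Bit-by-bit CRC32C (Castagnoli) sketch -- the checksum behind the
# NVMe/TCP data digest checks failing in the log below. Illustrative only.

CRC32C_POLY_REFLECTED = 0x82F63B78  # bit-reversed form of 0x1EDC6F41


def crc32c(data: bytes) -> int:
    """CRC32C with the standard 0xFFFFFFFF init and final XOR."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (CRC32C_POLY_REFLECTED if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF


def data_digest_ok(payload: bytes, received_digest: int) -> bool:
    # Conceptually what the receive path does: recompute the digest over the
    # PDU payload and compare against the digest field carried in the PDU.
    # With corrupt error injection, the recomputed value never matches.
    return crc32c(payload) == received_digest


# Standard CRC32C check value for the "123456789" test vector:
assert crc32c(b"123456789") == 0xE3069283
```

With the injection active, every comparison fails, which is why each READ below is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) rather than success.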
00:26:21.461 [2024-11-20 11:20:48.776641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.776681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.776692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.782319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.782346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.782355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.787877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.787899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.787908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.793487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.793510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.793518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.798946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.798973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.798981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.804278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.804300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.804308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.809686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.809707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.809714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.815078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.815100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.815109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.820459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.820480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.820488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.825941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.825968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.825975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.831248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.831270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.831278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.836673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.836694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:21.461 [2024-11-20 11:20:48.836702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.842043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.842065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.842073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.847468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.847489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.847497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.852803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.852825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.852834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.858295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.858317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.858325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.863765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.863788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.863798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.869160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.869182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.869193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.874539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.874560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.874568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.879887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.879909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.879917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.885331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.885352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.885360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.890739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.890761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.890769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.896128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.896150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.896158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.901473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 
00:26:21.461 [2024-11-20 11:20:48.901494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.901502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.906787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.906809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.906817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.912075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.912097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.912105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.917554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.917576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.917584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.923606] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.923628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.923636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.929159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.929181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.929191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.935083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.935105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.935114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.940492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.940514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.940522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.945787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.945809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.945816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.461 [2024-11-20 11:20:48.951270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.461 [2024-11-20 11:20:48.951295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.461 [2024-11-20 11:20:48.951305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.724 [2024-11-20 11:20:48.956859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.724 [2024-11-20 11:20:48.956883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.724 [2024-11-20 11:20:48.956893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.724 [2024-11-20 11:20:48.962438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.724 [2024-11-20 11:20:48.962463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.724 [2024-11-20 11:20:48.962479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.724 [2024-11-20 11:20:48.967826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.724 [2024-11-20 11:20:48.967849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.724 [2024-11-20 11:20:48.967858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.724 [2024-11-20 11:20:48.973458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:48.973479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:48.973488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:48.978932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:48.978960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:48.978969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:48.984468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:48.984490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:48.984498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:48.989864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:48.989886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:48.989894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:48.995363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:48.995384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:48.995392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.000737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.000758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.000766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.006230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.006252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.006261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.011646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.011671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.011679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.017092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.017114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.017122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.022552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.022573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.022582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.028142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.028163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.028171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.033840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.033863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.033871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.039872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.039894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.039903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.045465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.045487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.045495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.050962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.050983] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.050991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.056405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.056425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.056433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.062556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.062579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.062587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.070019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.070042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.070051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.077230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.077253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.077262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.084403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.084426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.084435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.091555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.091577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.091586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.098024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.098047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.098056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.106383] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.106405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.106414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.113762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.113785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.113794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.121340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.121361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.121374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.128073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.128095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.128104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.135815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.135837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.135846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.143012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.143035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.143044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.149706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.149727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.149736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.153121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.153142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.153150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.159849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.159872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.159881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.167807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.167830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.167839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.174932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.174960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.725 [2024-11-20 11:20:49.174969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.725 [2024-11-20 11:20:49.182663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.725 [2024-11-20 11:20:49.182689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.726 [2024-11-20 
11:20:49.182697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.726 [2024-11-20 11:20:49.189321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.726 [2024-11-20 11:20:49.189343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.726 [2024-11-20 11:20:49.189350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.726 [2024-11-20 11:20:49.196207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.726 [2024-11-20 11:20:49.196229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.726 [2024-11-20 11:20:49.196237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.726 [2024-11-20 11:20:49.203258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.726 [2024-11-20 11:20:49.203280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.726 [2024-11-20 11:20:49.203288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.726 [2024-11-20 11:20:49.210042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.726 [2024-11-20 11:20:49.210063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.726 [2024-11-20 11:20:49.210072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.726 [2024-11-20 11:20:49.216558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.726 [2024-11-20 11:20:49.216583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.726 [2024-11-20 11:20:49.216593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.224513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.224538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.224547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.231474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.231496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.231507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.237765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.237786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.237795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.244337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.244360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.244368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.250616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.250638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.250646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.256420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.256442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.256451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.261501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.261524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.261532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.266823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.266844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.266852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.272084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.272106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.272114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.277449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.277471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.277479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.282834] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.282856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.282864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.288450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.288476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.288485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.294258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.294280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.294288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.299841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.299862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.299871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.305172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.305193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.305201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.310442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.310463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.310472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.315758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.315779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.315787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.320981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.321001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.321009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.326222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.326242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.326250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.984 [2024-11-20 11:20:49.331588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.984 [2024-11-20 11:20:49.331609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.984 [2024-11-20 11:20:49.331618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.337160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.337181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.337189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.342723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.342744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 
11:20:49.342752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.348180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.348202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.348210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.353237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.353259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.353267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.358560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.358583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.358591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.364020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.364042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.364050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.369403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.369425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.369433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.374849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.374870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.374878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.380330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.380352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.380364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.385806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.385827] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.385835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.391235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.391256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.391264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.395936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.395965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.395973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.399219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.399239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.399248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.404587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 
11:20:49.404607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.404616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.410050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.410072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.410080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.415520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.415540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.415548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.420864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.420884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.420892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.426187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.426211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.426219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.431518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.431539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.431547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.436806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.436826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.436835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.442114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.442135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.442143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.447631] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.447651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.447660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.985 [2024-11-20 11:20:49.453044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.985 [2024-11-20 11:20:49.453064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.985 [2024-11-20 11:20:49.453072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.986 [2024-11-20 11:20:49.458347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.986 [2024-11-20 11:20:49.458368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.986 [2024-11-20 11:20:49.458376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.986 [2024-11-20 11:20:49.463894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.986 [2024-11-20 11:20:49.463915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.986 [2024-11-20 11:20:49.463923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:21.986 [2024-11-20 11:20:49.469540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.986 [2024-11-20 11:20:49.469561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.986 [2024-11-20 11:20:49.469569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.986 [2024-11-20 11:20:49.474875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:21.986 [2024-11-20 11:20:49.474899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.986 [2024-11-20 11:20:49.474908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.481040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.481065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 11:20:49.481075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.486911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.486935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 11:20:49.486944] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.492580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.492603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 11:20:49.492612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.498009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.498031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 11:20:49.498039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.503433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.503453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 11:20:49.503461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.508874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.508894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 
11:20:49.508902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.514312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.514333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 11:20:49.514341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.519834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.519854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 11:20:49.519866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.525252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.525273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 11:20:49.525281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.530487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.530509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 11:20:49.530517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.246 [2024-11-20 11:20:49.535524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.246 [2024-11-20 11:20:49.535545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.246 [2024-11-20 11:20:49.535553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.540667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.540687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.540695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.545880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.545901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.545910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.551023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.551044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.551052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.556242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.556263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.556272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.561434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.561455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.561463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.566642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.566664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.566672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.571910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.571930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.571939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.577171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.577193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.577201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.582470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.582491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.582500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.587734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.587754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.587762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.592989] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.593010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.593017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.598206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.598227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.598235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.603448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.603469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.603478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.608707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.608728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.608739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.613900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.613921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.613929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.619143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.619164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.619172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.624411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.624432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.624440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.629678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.629699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.629707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.634977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.634998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.635006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.640182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.640203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.640211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.645443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.645464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.645472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.650723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.650745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 
11:20:49.650753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.247 [2024-11-20 11:20:49.656001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.247 [2024-11-20 11:20:49.656025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.247 [2024-11-20 11:20:49.656033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.661222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.661242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.661251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.666467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.666488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.666497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.671707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.671728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.671736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.676976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.676997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.677005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.682273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.682294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.682302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.687535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.687556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.687564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.692781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.692801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.692809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.698006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.698026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.698034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.703263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.703284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.703292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.708495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.708516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.708526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.713757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 
00:26:22.248 [2024-11-20 11:20:49.713778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.713787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.719051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.719072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.719080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.724333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.724354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.724362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.729594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.729615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.729623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.248 [2024-11-20 11:20:49.734816] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.248 [2024-11-20 11:20:49.734840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.248 [2024-11-20 11:20:49.734849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.740131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.740155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.740164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.745449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.745473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.745486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.750686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.750709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.750717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.755977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.756000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.756008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.761319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.761342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.761350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.766572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.766594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.766602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.508 5460.00 IOPS, 682.50 MiB/s [2024-11-20T10:20:50.004Z] [2024-11-20 11:20:49.772800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.772822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.772830] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.778168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.778190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.778198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.783516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.783539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.783547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.788766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.788788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.788796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.794136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.794159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.794167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.799533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.508 [2024-11-20 11:20:49.799554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.508 [2024-11-20 11:20:49.799563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.508 [2024-11-20 11:20:49.804832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.509 [2024-11-20 11:20:49.804855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.509 [2024-11-20 11:20:49.804863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.509 [2024-11-20 11:20:49.810123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.509 [2024-11-20 11:20:49.810144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.509 [2024-11-20 11:20:49.810152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.509 [2024-11-20 11:20:49.815425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.509 [2024-11-20 11:20:49.815448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.509 [2024-11-20 11:20:49.815457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.509 [2024-11-20 11:20:49.820721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.509 [2024-11-20 11:20:49.820743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.509 [2024-11-20 11:20:49.820751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.509 [2024-11-20 11:20:49.826096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.509 [2024-11-20 11:20:49.826118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.509 [2024-11-20 11:20:49.826126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.509 [2024-11-20 11:20:49.831427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.509 [2024-11-20 11:20:49.831449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.509 [2024-11-20 11:20:49.831457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.509 [2024-11-20 11:20:49.836693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:22.509 [2024-11-20 11:20:49.836714] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.836726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.841959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.841980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.841988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.847225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.847247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.847255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.852525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.852546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.852554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.857847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.857869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.857877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.863093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.863114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.863123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.868358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.868380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.868388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.873600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.873621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.873630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.878852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.878874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.878881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.884187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.884212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.884221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.889475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.889496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.889504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.894983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.895005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.895013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.900238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.900259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.509 [2024-11-20 11:20:49.900267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.509 [2024-11-20 11:20:49.905500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.509 [2024-11-20 11:20:49.905522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.905530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.910829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.910851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.910859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.916131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.916153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.916162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.921338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.921360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.921368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.926559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.926581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.926589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.931880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.931902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.931910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.937102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.937124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.937132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.942321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.942342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.942350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.947558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.947579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.947587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.952826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.952847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.952856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.958129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.958150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.958158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.963376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.963398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.963406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.968751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.968774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.968782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.974058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.974079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.974091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.979329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.979351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.979359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.984647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.984669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.984677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.991077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.991099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.991107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.510 [2024-11-20 11:20:49.996679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.510 [2024-11-20 11:20:49.996702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.510 [2024-11-20 11:20:49.996710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.002072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.002098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.002107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.007687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.007712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.007722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.013387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.013411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.013419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.019616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.019642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.019653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.025610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.025638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.025648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.030817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.030840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.030849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.036161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.036183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.036192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.041526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.041549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.041557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.046893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.046915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.046924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.053192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.053220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.053230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.058784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.770 [2024-11-20 11:20:50.058807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.770 [2024-11-20 11:20:50.058816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.770 [2024-11-20 11:20:50.064180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.064203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.064213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.069600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.069623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.069636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.075051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.075073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.075081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.080443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.080466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.080474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.086095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.086118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.086127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.091599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.091622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.091631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.097005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.097029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.097038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.102352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.102375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.102383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.107690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.107713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.107721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.113041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.113063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.113071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.118369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.118395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.118404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.123690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.123712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.123720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.129016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.129038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.129046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.134366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.134388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.134398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.139631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.139653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.139661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.144930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.144958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.144966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.150224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.150246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.150254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.155523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.155546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.155554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.160848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.160869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.160877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.164367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.164388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.164396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.169749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.169771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.169779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.176446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.176469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.176478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.184636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.184658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.771 [2024-11-20 11:20:50.184667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.771 [2024-11-20 11:20:50.192221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.771 [2024-11-20 11:20:50.192243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.772 [2024-11-20 11:20:50.192252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.772 [2024-11-20 11:20:50.200084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.772 [2024-11-20 11:20:50.200106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.772 [2024-11-20 11:20:50.200115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.772 [2024-11-20 11:20:50.207760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.772 [2024-11-20 11:20:50.207783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.772 [2024-11-20 11:20:50.207792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.772 [2024-11-20 11:20:50.215705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.772 [2024-11-20 11:20:50.215728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.772 [2024-11-20 11:20:50.215737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.772 [2024-11-20 11:20:50.223545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.772 [2024-11-20 11:20:50.223567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.772 [2024-11-20 11:20:50.223579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.772 [2024-11-20 11:20:50.231661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.772 [2024-11-20 11:20:50.231683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.772 [2024-11-20 11:20:50.231692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:22.772 [2024-11-20 11:20:50.238924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.772 [2024-11-20 11:20:50.238946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.772 [2024-11-20 11:20:50.238960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:22.772 [2024-11-20 11:20:50.246659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.772 [2024-11-20 11:20:50.246681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.772 [2024-11-20 11:20:50.246689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:22.772 [2024-11-20 11:20:50.254379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.772 [2024-11-20 11:20:50.254400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.772 [2024-11-20 11:20:50.254409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:22.772 [2024-11-20 11:20:50.261708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:22.772 [2024-11-20 11:20:50.261733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.772 [2024-11-20 11:20:50.261742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.031 [2024-11-20 11:20:50.269442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:23.031 [2024-11-20 11:20:50.269466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.031 [2024-11-20 11:20:50.269476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.031 [2024-11-20 11:20:50.277209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:23.031 [2024-11-20 11:20:50.277231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.031 [2024-11-20 11:20:50.277240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.031 [2024-11-20 11:20:50.285233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:23.031 [2024-11-20 11:20:50.285256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.031 [2024-11-20 11:20:50.285265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:23.031 [2024-11-20 11:20:50.292292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:23.032 [2024-11-20 11:20:50.292318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.032 [2024-11-20 11:20:50.292326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:23.032 [2024-11-20 11:20:50.297881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:23.032 [2024-11-20 11:20:50.297904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.032 [2024-11-20 11:20:50.297913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:23.032 [2024-11-20 11:20:50.303290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:23.032 [2024-11-20 11:20:50.303311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.032 [2024-11-20 11:20:50.303319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:23.032 [2024-11-20 11:20:50.308543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580)
00:26:23.032 [2024-11-20 11:20:50.308564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.032 [2024-11-20 11:20:50.308573]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.313784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.313805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.313813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.319026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.319047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.319055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.324329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.324349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.324357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.329632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.329652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.329661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.334903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.334923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.334932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.340199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.340219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.340227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.345477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.345497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.345505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.350754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.350776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.350785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.356032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.356053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.356062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.361245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.361265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.361274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.366475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.366497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.366505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.371769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.371791] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.371799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.377047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.377068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.377076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.382348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.382369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.382381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.387645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.387667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-11-20 11:20:50.387677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.032 [2024-11-20 11:20:50.392923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a42580) 00:26:23.032 [2024-11-20 11:20:50.392944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.392959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.398225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.398248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.398256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.403550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.403571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.403580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.408846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.408867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.408875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.414114] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.414136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.414143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.419458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.419479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.419487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.424752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.424774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.424782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.430088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.430109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.430117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.435289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.435310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.435318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.440500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.440521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.440529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.445712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.445733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.445741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.450996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.451017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.451025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.456257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.456279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.456287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.461582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.461602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.461611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.466823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.466845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.466853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.472070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.472091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.472103] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.477291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.477312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.477321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.482591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.482612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.482620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.487891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.487912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.487920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.493132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.493152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.493160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.498384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.498405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-11-20 11:20:50.498413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.033 [2024-11-20 11:20:50.503623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.033 [2024-11-20 11:20:50.503645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-11-20 11:20:50.503653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.034 [2024-11-20 11:20:50.508828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.034 [2024-11-20 11:20:50.508849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-11-20 11:20:50.508858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.034 [2024-11-20 11:20:50.514051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.034 [2024-11-20 11:20:50.514071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-11-20 11:20:50.514079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.034 [2024-11-20 11:20:50.519293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.034 [2024-11-20 11:20:50.519318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-11-20 11:20:50.519326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.294 [2024-11-20 11:20:50.524601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.294 [2024-11-20 11:20:50.524626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.294 [2024-11-20 11:20:50.524636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.294 [2024-11-20 11:20:50.529886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.294 [2024-11-20 11:20:50.529909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.294 [2024-11-20 11:20:50.529917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.294 [2024-11-20 11:20:50.535138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 
11:20:50.535161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.535170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.540413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.540435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.540443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.545660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.545681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.545689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.551003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.551024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.551033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.556324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.556346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.556355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.561555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.561576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.561584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.566784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.566806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.566814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.572063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.572083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.572092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.577324] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.577345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.577353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.582598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.582620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.582628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.587858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.587880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.587888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.593186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.593207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.593215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.598465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.598486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.598494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.603786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.603807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.603815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.609061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.609082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.609093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.614367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.614388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.614396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.619625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.619645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.619653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.624919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.624941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.624954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.630194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.630215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.295 [2024-11-20 11:20:50.630223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.295 [2024-11-20 11:20:50.635427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.295 [2024-11-20 11:20:50.635449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.635457] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.640684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.640705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.640713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.645972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.645993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.646001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.651265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.651286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.651294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.656587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.656612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.656621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.661921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.661942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.661957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.667227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.667249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.667257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.672512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.672533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.672541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.677877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.677898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.677906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.683166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.683187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.683195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.688453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.688474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.688482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.693686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.693707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.693715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.698955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.698975] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.698985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.704183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.704203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.704211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.709413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.709433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.709441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.714673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.714693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.714701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.719945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.719972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.719980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.725278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.725299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.725307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.730550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.730571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.730579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.296 [2024-11-20 11:20:50.735988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.296 [2024-11-20 11:20:50.736009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.296 [2024-11-20 11:20:50.736017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.297 [2024-11-20 11:20:50.741277] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.297 [2024-11-20 11:20:50.741298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.297 [2024-11-20 11:20:50.741306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.297 [2024-11-20 11:20:50.746527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.297 [2024-11-20 11:20:50.746549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.297 [2024-11-20 11:20:50.746561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.297 [2024-11-20 11:20:50.751845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.297 [2024-11-20 11:20:50.751866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.297 [2024-11-20 11:20:50.751874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.297 [2024-11-20 11:20:50.757153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.297 [2024-11-20 11:20:50.757173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.297 [2024-11-20 11:20:50.757182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:23.297 [2024-11-20 11:20:50.762433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.297 [2024-11-20 11:20:50.762454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.297 [2024-11-20 11:20:50.762462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.297 [2024-11-20 11:20:50.767726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a42580) 00:26:23.297 [2024-11-20 11:20:50.767747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.297 [2024-11-20 11:20:50.767757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.297 5535.00 IOPS, 691.88 MiB/s 00:26:23.297 Latency(us) 00:26:23.297 [2024-11-20T10:20:50.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.297 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:23.297 nvme0n1 : 2.00 5537.25 692.16 0.00 0.00 2886.86 698.10 14930.81 00:26:23.297 [2024-11-20T10:20:50.793Z] =================================================================================================================== 00:26:23.297 [2024-11-20T10:20:50.793Z] Total : 5537.25 692.16 0.00 0.00 2886.86 698.10 14930.81 00:26:23.297 { 00:26:23.297 "results": [ 00:26:23.297 { 00:26:23.297 "job": "nvme0n1", 00:26:23.297 "core_mask": "0x2", 00:26:23.297 "workload": "randread", 00:26:23.297 "status": "finished", 00:26:23.297 "queue_depth": 16, 00:26:23.297 "io_size": 131072, 00:26:23.297 "runtime": 2.002078, 00:26:23.297 "iops": 5537.246800574203, 00:26:23.297 "mibps": 
692.1558500717754, 00:26:23.297 "io_failed": 0, 00:26:23.297 "io_timeout": 0, 00:26:23.297 "avg_latency_us": 2886.864745350579, 00:26:23.297 "min_latency_us": 698.1008695652174, 00:26:23.297 "max_latency_us": 14930.810434782608 00:26:23.297 } 00:26:23.297 ], 00:26:23.297 "core_count": 1 00:26:23.297 } 00:26:23.556 11:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:23.556 11:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:23.556 | .driver_specific 00:26:23.556 | .nvme_error 00:26:23.556 | .status_code 00:26:23.556 | .command_transient_transport_error' 00:26:23.556 11:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:23.556 11:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:23.556 11:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 358 > 0 )) 00:26:23.556 11:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 11488 00:26:23.556 11:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 11488 ']' 00:26:23.556 11:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 11488 00:26:23.556 11:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:23.556 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.556 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 11488 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 11488' 00:26:23.815 killing process with pid 11488 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 11488 00:26:23.815 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.815 00:26:23.815 Latency(us) 00:26:23.815 [2024-11-20T10:20:51.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.815 [2024-11-20T10:20:51.311Z] =================================================================================================================== 00:26:23.815 [2024-11-20T10:20:51.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 11488 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=11969 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 11969 /var/tmp/bperf.sock 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 11969 ']' 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:23.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.815 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:23.815 [2024-11-20 11:20:51.253133] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:26:23.815 [2024-11-20 11:20:51.253182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11969 ] 00:26:24.073 [2024-11-20 11:20:51.330829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.073 [2024-11-20 11:20:51.368881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.073 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.073 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:24.073 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.073 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.331 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:24.331 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.331 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.331 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.331 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.331 11:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.589 nvme0n1 00:26:24.589 11:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:24.589 11:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.590 11:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.849 11:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.849 11:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:24.849 11:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:24.849 Running I/O for 2 seconds... 
00:26:24.849 [2024-11-20 11:20:52.200646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e1710 00:26:24.849 [2024-11-20 11:20:52.201545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.201576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.210614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e6fa8 00:26:24.849 [2024-11-20 11:20:52.211747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.211771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.220279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f0bc0 00:26:24.849 [2024-11-20 11:20:52.221540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.221561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.229939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e12d8 00:26:24.849 [2024-11-20 11:20:52.231349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.231374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.236595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ee5c8 00:26:24.849 [2024-11-20 11:20:52.237282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.237302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.245973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e6fa8 00:26:24.849 [2024-11-20 11:20:52.246652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.246673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.255597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e84c0 00:26:24.849 [2024-11-20 11:20:52.256272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.256292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.266167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fa3a0 00:26:24.849 [2024-11-20 11:20:52.266866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.266886] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.275096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e3d08 00:26:24.849 [2024-11-20 11:20:52.276076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.276096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.284592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ecc78 00:26:24.849 [2024-11-20 11:20:52.285657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.285676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.295569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7c50 00:26:24.849 [2024-11-20 11:20:52.297109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.297128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.302472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ec408 00:26:24.849 [2024-11-20 11:20:52.303297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.303315] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.313628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f96f8 00:26:24.849 [2024-11-20 11:20:52.314786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.849 [2024-11-20 11:20:52.314805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.849 [2024-11-20 11:20:52.323478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ed0b0 00:26:24.850 [2024-11-20 11:20:52.324913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.850 [2024-11-20 11:20:52.324932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.850 [2024-11-20 11:20:52.330096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e88f8 00:26:24.850 [2024-11-20 11:20:52.330802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.850 [2024-11-20 11:20:52.330821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.850 [2024-11-20 11:20:52.339717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e99d8 00:26:24.850 [2024-11-20 11:20:52.340568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16471 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:24.850 [2024-11-20 11:20:52.340590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.109 [2024-11-20 11:20:52.349285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7c50 00:26:25.109 [2024-11-20 11:20:52.350119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.109 [2024-11-20 11:20:52.350141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.109 [2024-11-20 11:20:52.358955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166df988 00:26:25.109 [2024-11-20 11:20:52.359814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.109 [2024-11-20 11:20:52.359835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.109 [2024-11-20 11:20:52.368146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166df550 00:26:25.109 [2024-11-20 11:20:52.369110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.109 [2024-11-20 11:20:52.369130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.109 [2024-11-20 11:20:52.379550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ebb98 00:26:25.109 [2024-11-20 11:20:52.380963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:3211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.109 [2024-11-20 11:20:52.380982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.109 [2024-11-20 11:20:52.389186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ff3c8 00:26:25.109 [2024-11-20 11:20:52.390764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.109 [2024-11-20 11:20:52.390783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.109 [2024-11-20 11:20:52.395786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fc128 00:26:25.109 [2024-11-20 11:20:52.396628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.109 [2024-11-20 11:20:52.396647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.109 [2024-11-20 11:20:52.405384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e3498 00:26:25.109 [2024-11-20 11:20:52.406359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.406379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.415009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ee5c8 00:26:25.110 [2024-11-20 11:20:52.416111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.416131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.426282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ec408 00:26:25.110 [2024-11-20 11:20:52.427792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.427811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.432899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ff3c8 00:26:25.110 [2024-11-20 11:20:52.433680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.433698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.443862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ff3c8 00:26:25.110 [2024-11-20 11:20:52.445201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.445220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.453214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f1ca0 00:26:25.110 
[2024-11-20 11:20:52.454543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.454562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.460199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fc560 00:26:25.110 [2024-11-20 11:20:52.460940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.460962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.471615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f6020 00:26:25.110 [2024-11-20 11:20:52.472967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.472994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.480344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e5658 00:26:25.110 [2024-11-20 11:20:52.481440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.481459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.489581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x177b640) with pdu=0x2000166ea248 00:26:25.110 [2024-11-20 11:20:52.490633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.490653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.499769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e27f0 00:26:25.110 [2024-11-20 11:20:52.501128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.501147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.507815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e73e0 00:26:25.110 [2024-11-20 11:20:52.508632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.508651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.518482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f6020 00:26:25.110 [2024-11-20 11:20:52.519954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.519973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.524936] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e84c0 00:26:25.110 [2024-11-20 11:20:52.525607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.525626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.534274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f2510 00:26:25.110 [2024-11-20 11:20:52.534822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.534841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.543996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fb048 00:26:25.110 [2024-11-20 11:20:52.544853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.544872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.553584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166de470 00:26:25.110 [2024-11-20 11:20:52.554584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.554603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:26:25.110 [2024-11-20 11:20:52.562965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f7538 00:26:25.110 [2024-11-20 11:20:52.563531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.563551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.571923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e27f0 00:26:25.110 [2024-11-20 11:20:52.572739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.572759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.582915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f2948 00:26:25.110 [2024-11-20 11:20:52.584271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.584290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.592532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f1430 00:26:25.110 [2024-11-20 11:20:52.594018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.594038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.110 [2024-11-20 11:20:52.599195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fef90 00:26:25.110 [2024-11-20 11:20:52.600025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.110 [2024-11-20 11:20:52.600046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.611004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e5658 00:26:25.370 [2024-11-20 11:20:52.612259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.612282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.620613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e6b70 00:26:25.370 [2024-11-20 11:20:52.621901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.621921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.628552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7818 00:26:25.370 [2024-11-20 11:20:52.629367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.629387] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.637734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7818 00:26:25.370 [2024-11-20 11:20:52.638546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.638565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.647202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f6890 00:26:25.370 [2024-11-20 11:20:52.648226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.648247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.656399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f46d0 00:26:25.370 [2024-11-20 11:20:52.657079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.657099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.665033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7c50 00:26:25.370 [2024-11-20 11:20:52.666249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.666269] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.672973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fcdd0 00:26:25.370 [2024-11-20 11:20:52.673625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.673644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.684941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e2c28 00:26:25.370 [2024-11-20 11:20:52.686124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.686144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.694655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e0630 00:26:25.370 [2024-11-20 11:20:52.695925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.695944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.703403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ff3c8 00:26:25.370 [2024-11-20 11:20:52.704300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13604 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:25.370 [2024-11-20 11:20:52.704320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.712840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e6300 00:26:25.370 [2024-11-20 11:20:52.713793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.370 [2024-11-20 11:20:52.713815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.370 [2024-11-20 11:20:52.723823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e6300 00:26:25.370 [2024-11-20 11:20:52.725359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.725378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.730553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166dfdc0 00:26:25.371 [2024-11-20 11:20:52.731378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.731397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.741979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e73e0 00:26:25.371 [2024-11-20 11:20:52.743197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:15840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.743216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.749898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f1868 00:26:25.371 [2024-11-20 11:20:52.750639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.750658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.759018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f5378 00:26:25.371 [2024-11-20 11:20:52.759739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.759758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.769393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f5378 00:26:25.371 [2024-11-20 11:20:52.770583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.770602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.777345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e6b70 00:26:25.371 [2024-11-20 11:20:52.778068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.778087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.788735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e73e0 00:26:25.371 [2024-11-20 11:20:52.790253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.790273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.795200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7818 00:26:25.371 [2024-11-20 11:20:52.795919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.795938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.803909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7c50 00:26:25.371 [2024-11-20 11:20:52.804507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.804526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.814848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7c50 00:26:25.371 
[2024-11-20 11:20:52.815917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.815936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.823426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e5658 00:26:25.371 [2024-11-20 11:20:52.824210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.824229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.832147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166de8a8 00:26:25.371 [2024-11-20 11:20:52.832756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.832775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.843600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f4f40 00:26:25.371 [2024-11-20 11:20:52.844914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.844934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.851592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x177b640) with pdu=0x2000166e3498 00:26:25.371 [2024-11-20 11:20:52.852443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.852462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.371 [2024-11-20 11:20:52.860403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e5658 00:26:25.371 [2024-11-20 11:20:52.861327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.371 [2024-11-20 11:20:52.861350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.870050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f96f8 00:26:25.631 [2024-11-20 11:20:52.870979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.871002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.879666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f4298 00:26:25.631 [2024-11-20 11:20:52.880696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.880717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.889285] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ed4e8 00:26:25.631 [2024-11-20 11:20:52.890482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.890502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.897908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f0ff8 00:26:25.631 [2024-11-20 11:20:52.898735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.898755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.906633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e5a90 00:26:25.631 [2024-11-20 11:20:52.907269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.907288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.918170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fd208 00:26:25.631 [2024-11-20 11:20:52.919592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.919612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 
dnr:0 00:26:25.631 [2024-11-20 11:20:52.927768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eee38 00:26:25.631 [2024-11-20 11:20:52.929315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.929335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.934507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e5a90 00:26:25.631 [2024-11-20 11:20:52.935366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.935386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.943831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f57b0 00:26:25.631 [2024-11-20 11:20:52.944683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.944702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.955180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e0630 00:26:25.631 [2024-11-20 11:20:52.956565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.956589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.961877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166dfdc0 00:26:25.631 [2024-11-20 11:20:52.962564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.962584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.972879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166dfdc0 00:26:25.631 [2024-11-20 11:20:52.974122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.974141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.982282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e3060 00:26:25.631 [2024-11-20 11:20:52.983527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.983546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:52.991279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f3a28 00:26:25.631 [2024-11-20 11:20:52.992515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:52.992535] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:53.000634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f81e0 00:26:25.631 [2024-11-20 11:20:53.001402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:53.001422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:53.010026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ff3c8 00:26:25.631 [2024-11-20 11:20:53.011034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:53.011053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.631 [2024-11-20 11:20:53.019129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166de038 00:26:25.631 [2024-11-20 11:20:53.020137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.631 [2024-11-20 11:20:53.020156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.028294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166de038 00:26:25.632 [2024-11-20 11:20:53.029409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.632 [2024-11-20 
11:20:53.029428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.038000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e1f80 00:26:25.632 [2024-11-20 11:20:53.039323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.632 [2024-11-20 11:20:53.039343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.047340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f1868 00:26:25.632 [2024-11-20 11:20:53.048660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.632 [2024-11-20 11:20:53.048679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.055429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e1710 00:26:25.632 [2024-11-20 11:20:53.056336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.632 [2024-11-20 11:20:53.056355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.064460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166feb58 00:26:25.632 [2024-11-20 11:20:53.065338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1300 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:25.632 [2024-11-20 11:20:53.065357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.073654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eee38 00:26:25.632 [2024-11-20 11:20:53.074520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.632 [2024-11-20 11:20:53.074539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.082772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e99d8 00:26:25.632 [2024-11-20 11:20:53.083650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.632 [2024-11-20 11:20:53.083670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.092704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fe2e8 00:26:25.632 [2024-11-20 11:20:53.093837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.632 [2024-11-20 11:20:53.093855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.102525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f6458 00:26:25.632 [2024-11-20 11:20:53.103719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:71 nsid:1 lba:19088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.632 [2024-11-20 11:20:53.103738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.112127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ed4e8 00:26:25.632 [2024-11-20 11:20:53.113470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.632 [2024-11-20 11:20:53.113489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.632 [2024-11-20 11:20:53.120712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fda78 00:26:25.632 [2024-11-20 11:20:53.121719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.632 [2024-11-20 11:20:53.121741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.891 [2024-11-20 11:20:53.130015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e9168 00:26:25.891 [2024-11-20 11:20:53.130986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.891 [2024-11-20 11:20:53.131009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.891 [2024-11-20 11:20:53.140396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e3498 00:26:25.891 [2024-11-20 11:20:53.141846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.891 [2024-11-20 11:20:53.141866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.891 [2024-11-20 11:20:53.150023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ebb98 00:26:25.891 [2024-11-20 11:20:53.151577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.891 [2024-11-20 11:20:53.151597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.891 [2024-11-20 11:20:53.156483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eee38 00:26:25.891 [2024-11-20 11:20:53.157237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.891 [2024-11-20 11:20:53.157256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.891 [2024-11-20 11:20:53.167323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fa3a0 00:26:25.891 [2024-11-20 11:20:53.168599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.891 [2024-11-20 11:20:53.168619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.891 [2024-11-20 11:20:53.175913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eb760 
00:26:25.891 [2024-11-20 11:20:53.176855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.891 [2024-11-20 11:20:53.176875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.891 [2024-11-20 11:20:53.185008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e38d0 00:26:25.891 [2024-11-20 11:20:53.186277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.891 [2024-11-20 11:20:53.186296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.891 27245.00 IOPS, 106.43 MiB/s [2024-11-20T10:20:53.387Z] [2024-11-20 11:20:53.195347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e38d0 00:26:25.891 [2024-11-20 11:20:53.196743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.196766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.204961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fd640 00:26:25.892 [2024-11-20 11:20:53.206474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.206494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.211607] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f0bc0 00:26:25.892 [2024-11-20 11:20:53.212319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.212339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.222158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f3e60 00:26:25.892 [2024-11-20 11:20:53.223001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.223021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.231053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ef6a8 00:26:25.892 [2024-11-20 11:20:53.232091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.232111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.240326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f57b0 00:26:25.892 [2024-11-20 11:20:53.241273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.241291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 
dnr:0 00:26:25.892 [2024-11-20 11:20:53.249630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f3e60 00:26:25.892 [2024-11-20 11:20:53.250563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.250583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.259114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e0ea0 00:26:25.892 [2024-11-20 11:20:53.260151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.260170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.268429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f7da8 00:26:25.892 [2024-11-20 11:20:53.269492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.269512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.277606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eb328 00:26:25.892 [2024-11-20 11:20:53.278678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.278697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.287101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fa3a0 00:26:25.892 [2024-11-20 11:20:53.288276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.288295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.295812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166de470 00:26:25.892 [2024-11-20 11:20:53.296973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.296993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.305438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fcdd0 00:26:25.892 [2024-11-20 11:20:53.306704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.306722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.314634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fac10 00:26:25.892 [2024-11-20 11:20:53.315904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.315922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.324231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f8a50 00:26:25.892 [2024-11-20 11:20:53.325612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.325631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.333547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eaef0 00:26:25.892 [2024-11-20 11:20:53.334931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.334953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.339809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166dece0 00:26:25.892 [2024-11-20 11:20:53.340478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.340496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.349426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fe2e8 00:26:25.892 [2024-11-20 11:20:53.350252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.892 [2024-11-20 11:20:53.350270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.360666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fb480 00:26:25.892 [2024-11-20 11:20:53.361829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.361849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.369382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ed0b0 00:26:25.892 [2024-11-20 11:20:53.370530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.370549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.892 [2024-11-20 11:20:53.378981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eff18 00:26:25.892 [2024-11-20 11:20:53.380270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.892 [2024-11-20 11:20:53.380290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:26.152 [2024-11-20 11:20:53.388591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ef6a8 00:26:26.152 [2024-11-20 11:20:53.389895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:24172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.152 [2024-11-20 11:20:53.389917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:26.152 [2024-11-20 11:20:53.396643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ea248 00:26:26.152 [2024-11-20 11:20:53.397956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.152 [2024-11-20 11:20:53.397976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:26.152 [2024-11-20 11:20:53.404512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fdeb0 00:26:26.152 [2024-11-20 11:20:53.405192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.152 [2024-11-20 11:20:53.405211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:26.152 [2024-11-20 11:20:53.414129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eea00 00:26:26.152 [2024-11-20 11:20:53.414922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.152 [2024-11-20 11:20:53.414941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:26.152 [2024-11-20 11:20:53.423749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fbcf0 00:26:26.152 [2024-11-20 11:20:53.424671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.152 [2024-11-20 11:20:53.424691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:26.152 [2024-11-20 11:20:53.433974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fb8b8 00:26:26.152 [2024-11-20 11:20:53.435016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.152 [2024-11-20 11:20:53.435039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:26.152 [2024-11-20 11:20:53.443458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f81e0 00:26:26.152 [2024-11-20 11:20:53.444621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.152 [2024-11-20 11:20:53.444640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:26.152 [2024-11-20 11:20:53.452170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e23b8 00:26:26.152 [2024-11-20 11:20:53.453342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.152 [2024-11-20 11:20:53.453361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:26.152 [2024-11-20 11:20:53.461502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fe720 
00:26:26.152 [2024-11-20 11:20:53.462670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.152 [2024-11-20 11:20:53.462690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:26.152 [2024-11-20 11:20:53.470270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166df118 00:26:26.153 [2024-11-20 11:20:53.471533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.471553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.479711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fb480 00:26:26.153 [2024-11-20 11:20:53.480569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.480588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.490469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f20d8 00:26:26.153 [2024-11-20 11:20:53.491990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.492009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.496935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x177b640) with pdu=0x2000166f1430 00:26:26.153 [2024-11-20 11:20:53.497626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.497645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.506278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f1868 00:26:26.153 [2024-11-20 11:20:53.506976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.506995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.514839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f9f68 00:26:26.153 [2024-11-20 11:20:53.515519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.515538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.524468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166dfdc0 00:26:26.153 [2024-11-20 11:20:53.525267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.525287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.534076] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e0630 00:26:26.153 [2024-11-20 11:20:53.534981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.535000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.543714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f57b0 00:26:26.153 [2024-11-20 11:20:53.544750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.544768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.553331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fd640 00:26:26.153 [2024-11-20 11:20:53.554485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.554503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.561878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166dfdc0 00:26:26.153 [2024-11-20 11:20:53.562687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.562706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 
dnr:0 00:26:26.153 [2024-11-20 11:20:53.573283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e0630 00:26:26.153 [2024-11-20 11:20:53.574802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.574822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.579738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166edd58 00:26:26.153 [2024-11-20 11:20:53.580442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.580460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.588432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f57b0 00:26:26.153 [2024-11-20 11:20:53.589120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.589138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.598044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eb328 00:26:26.153 [2024-11-20 11:20:53.598876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.598895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.609295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e2c28 00:26:26.153 [2024-11-20 11:20:53.610483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.610504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.616798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e8d30 00:26:26.153 [2024-11-20 11:20:53.617511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.617530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.627196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fe2e8 00:26:26.153 [2024-11-20 11:20:53.628362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.628381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.635701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ebb98 00:26:26.153 [2024-11-20 11:20:53.636446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.153 [2024-11-20 11:20:53.636465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:26.153 [2024-11-20 11:20:53.645246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ea248 00:26:26.413 [2024-11-20 11:20:53.645888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.645911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.655054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7818 00:26:26.413 [2024-11-20 11:20:53.655793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.655815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.663718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f3e60 00:26:26.413 [2024-11-20 11:20:53.665070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.665091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.672245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ec840 00:26:26.413 [2024-11-20 11:20:53.672955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 
[2024-11-20 11:20:53.672977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.682597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7c50 00:26:26.413 [2024-11-20 11:20:53.683789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.683807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.692135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e38d0 00:26:26.413 [2024-11-20 11:20:53.693316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.693335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.700436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e5a90 00:26:26.413 [2024-11-20 11:20:53.701311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.701331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.709654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f4f40 00:26:26.413 [2024-11-20 11:20:53.710506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16217 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.710525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.719001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166df550 00:26:26.413 [2024-11-20 11:20:53.719837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.719857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.729582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e4140 00:26:26.413 [2024-11-20 11:20:53.730867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.730886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.738902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ee5c8 00:26:26.413 [2024-11-20 11:20:53.740198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.740217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.747159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f7538 00:26:26.413 [2024-11-20 11:20:53.748436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:1111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.748456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.755037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fe720 00:26:26.413 [2024-11-20 11:20:53.755718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.755737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.764638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e5a90 00:26:26.413 [2024-11-20 11:20:53.765473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.765492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.774887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166dece0 00:26:26.413 [2024-11-20 11:20:53.775829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.775849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.784349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ee5c8 00:26:26.413 [2024-11-20 11:20:53.785429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.413 [2024-11-20 11:20:53.785449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:26.413 [2024-11-20 11:20:53.793042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166df550 00:26:26.414 [2024-11-20 11:20:53.794113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.794132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.802626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f5378 00:26:26.414 [2024-11-20 11:20:53.803815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.803834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.812251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e3d08 00:26:26.414 [2024-11-20 11:20:53.813535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.813554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.821918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fa3a0 
00:26:26.414 [2024-11-20 11:20:53.823326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.823345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.831525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e01f8 00:26:26.414 [2024-11-20 11:20:53.833043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.833062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.837970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e27f0 00:26:26.414 [2024-11-20 11:20:53.838711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.838730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.847105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f9f68 00:26:26.414 [2024-11-20 11:20:53.847919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.847937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.856719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x177b640) with pdu=0x2000166ff3c8 00:26:26.414 [2024-11-20 11:20:53.857639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.857658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.866331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f6020 00:26:26.414 [2024-11-20 11:20:53.867382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.867402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.874890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f7100 00:26:26.414 [2024-11-20 11:20:53.875618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.875637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.883962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f3e60 00:26:26.414 [2024-11-20 11:20:53.884672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.884691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.893164] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f6458 00:26:26.414 [2024-11-20 11:20:53.893885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.893904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.414 [2024-11-20 11:20:53.902399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fa3a0 00:26:26.414 [2024-11-20 11:20:53.903126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.414 [2024-11-20 11:20:53.903147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:53.911859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f9b30 00:26:26.673 [2024-11-20 11:20:53.912604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:53.912631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:53.921311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e3498 00:26:26.673 [2024-11-20 11:20:53.921812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:53.921832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:26:26.673 [2024-11-20 11:20:53.930926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eb328 00:26:26.673 [2024-11-20 11:20:53.931549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:53.931569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:53.941928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fc128 00:26:26.673 [2024-11-20 11:20:53.943450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:53.943470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:53.948408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ec408 00:26:26.673 [2024-11-20 11:20:53.949114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:53.949134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:53.958876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e2c28 00:26:26.673 [2024-11-20 11:20:53.960029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:53.960049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:53.968149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e27f0 00:26:26.673 [2024-11-20 11:20:53.969321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:53.969341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:53.976594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ebfd0 00:26:26.673 [2024-11-20 11:20:53.977640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:53.977661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:53.985831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eff18 00:26:26.673 [2024-11-20 11:20:53.986671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:53.986691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:53.994980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ddc00 00:26:26.673 [2024-11-20 11:20:53.995920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:53.995938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:54.004595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e84c0 00:26:26.673 [2024-11-20 11:20:54.005674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:54.005694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:54.013997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f7100 00:26:26.673 [2024-11-20 11:20:54.014624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.673 [2024-11-20 11:20:54.014645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:26.673 [2024-11-20 11:20:54.022963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e6b70 00:26:26.674 [2024-11-20 11:20:54.023853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.023872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.032203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e8088 00:26:26.674 [2024-11-20 11:20:54.033067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:26.674 [2024-11-20 11:20:54.033086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.040454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e73e0 00:26:26.674 [2024-11-20 11:20:54.041178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.041197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.050085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166fdeb0 00:26:26.674 [2024-11-20 11:20:54.050909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.050927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.060341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f1430 00:26:26.674 [2024-11-20 11:20:54.061215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.061235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.069858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e84c0 00:26:26.674 [2024-11-20 11:20:54.070933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:4329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.070957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.078560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166efae0 00:26:26.674 [2024-11-20 11:20:54.079543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.079563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.089539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166efae0 00:26:26.674 [2024-11-20 11:20:54.091104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.091124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.096085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f3e60 00:26:26.674 [2024-11-20 11:20:54.096890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.096909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.107461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e0630 00:26:26.674 [2024-11-20 11:20:54.108623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.108643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.116698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f9b30 00:26:26.674 [2024-11-20 11:20:54.117806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.117826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.125324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e4de8 00:26:26.674 [2024-11-20 11:20:54.126514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.126534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.134376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166ecc78 00:26:26.674 [2024-11-20 11:20:54.135459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.135478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.143757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166e7818 00:26:26.674 
[2024-11-20 11:20:54.144413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.144433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.152959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f6890 00:26:26.674 [2024-11-20 11:20:54.153840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.153863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:26.674 [2024-11-20 11:20:54.162278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166f0bc0 00:26:26.674 [2024-11-20 11:20:54.163186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.674 [2024-11-20 11:20:54.163208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:26.932 [2024-11-20 11:20:54.172022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177b640) with pdu=0x2000166eaab8 00:26:26.932 [2024-11-20 11:20:54.172778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.932 [2024-11-20 11:20:54.172802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:26.932 [2024-11-20 11:20:54.180962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x177b640) with pdu=0x2000166ebfd0 00:26:26.932 [2024-11-20 11:20:54.181976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.932 [2024-11-20 11:20:54.181998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:26.932 27464.50 IOPS, 107.28 MiB/s 00:26:26.932 Latency(us) 00:26:26.932 [2024-11-20T10:20:54.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.932 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:26.932 nvme0n1 : 2.00 27487.04 107.37 0.00 0.00 4652.11 1802.24 15728.64 00:26:26.932 [2024-11-20T10:20:54.428Z] =================================================================================================================== 00:26:26.932 [2024-11-20T10:20:54.428Z] Total : 27487.04 107.37 0.00 0.00 4652.11 1802.24 15728.64 00:26:26.933 { 00:26:26.933 "results": [ 00:26:26.933 { 00:26:26.933 "job": "nvme0n1", 00:26:26.933 "core_mask": "0x2", 00:26:26.933 "workload": "randwrite", 00:26:26.933 "status": "finished", 00:26:26.933 "queue_depth": 128, 00:26:26.933 "io_size": 4096, 00:26:26.933 "runtime": 2.003017, 00:26:26.933 "iops": 27487.035806485917, 00:26:26.933 "mibps": 107.37123361908561, 00:26:26.933 "io_failed": 0, 00:26:26.933 "io_timeout": 0, 00:26:26.933 "avg_latency_us": 4652.113258622881, 00:26:26.933 "min_latency_us": 1802.24, 00:26:26.933 "max_latency_us": 15728.64 00:26:26.933 } 00:26:26.933 ], 00:26:26.933 "core_count": 1 00:26:26.933 } 00:26:26.933 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:26.933 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:26.933 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 
-- # jq -r '.bdevs[0] 00:26:26.933 | .driver_specific 00:26:26.933 | .nvme_error 00:26:26.933 | .status_code 00:26:26.933 | .command_transient_transport_error' 00:26:26.933 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:26.933 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 )) 00:26:26.933 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 11969 00:26:26.933 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 11969 ']' 00:26:26.933 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 11969 00:26:27.191 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:27.191 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.191 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 11969 00:26:27.191 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:27.191 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:27.191 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 11969' 00:26:27.191 killing process with pid 11969 00:26:27.191 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 11969 00:26:27.191 Received shutdown signal, test time was about 2.000000 seconds 00:26:27.191 00:26:27.192 Latency(us) 00:26:27.192 [2024-11-20T10:20:54.688Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:26:27.192 [2024-11-20T10:20:54.688Z] =================================================================================================================== 00:26:27.192 [2024-11-20T10:20:54.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 11969 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=12656 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 12656 /var/tmp/bperf.sock 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 12656 ']' 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:27.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.192 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.192 [2024-11-20 11:20:54.682018] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:27.192 [2024-11-20 11:20:54.682065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12656 ] 00:26:27.192 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:27.192 Zero copy mechanism will not be used. 00:26:27.450 [2024-11-20 11:20:54.756550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.450 [2024-11-20 11:20:54.798652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.450 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.450 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:27.450 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:27.450 11:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:27.708 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:27.708 11:20:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.708 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.708 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.708 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.708 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.275 nvme0n1 00:26:28.275 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:28.275 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.275 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.275 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.275 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:28.275 11:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:28.275 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:28.275 Zero copy mechanism will not be used. 00:26:28.275 Running I/O for 2 seconds... 
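The JSON results earlier in the log report both iops and mibps for the 4096-byte randwrite job; the two figures are related by the IO size, mibps = iops * io_size / 2^20. A quick awk check with the values copied from that summary:

```shell
# Recompute MiB/s from the reported IOPS and the 4096-byte IO size.
# Values are taken verbatim from the "results" JSON above.
iops=27487.035806485917
mibps=$(awk -v iops="$iops" 'BEGIN { printf "%.2f", iops * 4096 / 1048576 }')
echo "$mibps MiB/s"   # matches the reported 107.37 MiB/s
```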
00:26:28.275 [2024-11-20 11:20:55.615604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.275 [2024-11-20 11:20:55.615763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.275 [2024-11-20 11:20:55.615792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.275 [2024-11-20 11:20:55.621977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.275 [2024-11-20 11:20:55.622140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.275 [2024-11-20 11:20:55.622163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.275 [2024-11-20 11:20:55.628395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.275 [2024-11-20 11:20:55.628487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.275 [2024-11-20 11:20:55.628509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.275 [2024-11-20 11:20:55.633969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.275 [2024-11-20 11:20:55.634054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.275 [2024-11-20 11:20:55.634074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.275 [2024-11-20 11:20:55.639616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.275 [2024-11-20 11:20:55.639696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.275 [2024-11-20 11:20:55.639717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.275 [2024-11-20 11:20:55.644787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.275 [2024-11-20 11:20:55.644866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.275 [2024-11-20 11:20:55.644886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.649916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.650008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.650028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.655269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.655354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.655375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.661389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.661452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.661471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.668426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.668618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.668638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.675188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.675325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.675344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.681589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.681860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.276 [2024-11-20 11:20:55.681882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.687855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.688091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.688114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.693408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.693631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.693652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.699004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.699240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.699261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.704714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.704913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.704932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.710244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.710455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.710478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.716126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.716324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.716343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.721732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.721927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.721946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.727602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.727810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.727830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.733028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.733247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.733268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.738765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.739002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.739026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.744086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.744285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.744304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.748820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 
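The `get_transient_errcount` helper seen earlier in the trace feeds `bdev_get_iostat` output through a jq filter to pull out the transient-transport-error counter before the `(( count > 0 ))` check. A minimal sketch of that extraction against a hand-written sample payload (the bdev name and counter value 215 here are illustrative, mirroring the shape used by the script, not captured from this run):

```shell
# Sample payload mimicking the bdev_get_iostat JSON shape queried above;
# the counter value (215) is hypothetical.
sample='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":215}}}}]}'
count=$(echo "$sample" | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')
echo "$count"
```

A nonzero count here is what lets the test assert that the injected crc32c corruption actually surfaced as transient transport errors on the target.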
00:26:28.276 [2024-11-20 11:20:55.749036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.749054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.754278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.754492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.754513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.759938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.760162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.760181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.276 [2024-11-20 11:20:55.765139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.276 [2024-11-20 11:20:55.765331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.276 [2024-11-20 11:20:55.765354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.536 [2024-11-20 11:20:55.769999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.536 [2024-11-20 11:20:55.770196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.536 [2024-11-20 11:20:55.770219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.536 [2024-11-20 11:20:55.775260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.536 [2024-11-20 11:20:55.775477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.536 [2024-11-20 11:20:55.775501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.536 [2024-11-20 11:20:55.780513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.536 [2024-11-20 11:20:55.780695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.536 [2024-11-20 11:20:55.780716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.536 [2024-11-20 11:20:55.784775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.536 [2024-11-20 11:20:55.784994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.536 [2024-11-20 11:20:55.785013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.536 [2024-11-20 11:20:55.788890] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.536 [2024-11-20 11:20:55.789110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.536 [2024-11-20 11:20:55.789132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.536 [2024-11-20 11:20:55.793169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.536 [2024-11-20 11:20:55.793371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.536 [2024-11-20 11:20:55.793391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.536 [2024-11-20 11:20:55.797374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.536 [2024-11-20 11:20:55.797580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.536 [2024-11-20 11:20:55.797602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.536 [2024-11-20 11:20:55.801311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.536 [2024-11-20 11:20:55.801529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.536 [2024-11-20 11:20:55.801550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:28.536 [2024-11-20 11:20:55.805216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.536 [2024-11-20 11:20:55.805424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.805445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.809291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.809484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.809504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.814904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.815142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.815164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.820163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.820361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.820381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.825091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.825310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.825332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.830115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.830291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.830310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.835089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.835270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.835289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.840476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.840706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.840727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.846384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.846563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.846582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.852004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.852221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.852242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.858645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.858838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.858858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.864709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.865030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.537 [2024-11-20 11:20:55.865052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.870189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.870496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.870526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.876230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.876484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.876506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.882086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.882320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.882342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.888074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.888263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.888284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.893937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.894244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.894266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.899630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.899821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.899841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.905294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.905558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.905579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.912400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.912510] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.912530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.917801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.917937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.917962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.923051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.923207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.923226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.927683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 11:20:55.927757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.537 [2024-11-20 11:20:55.927777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.537 [2024-11-20 11:20:55.932059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.537 [2024-11-20 
11:20:55.932121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.932140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.936017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.936073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.936092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.939880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.939969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.939988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.943795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.943910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.943929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.947748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) 
with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.947843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.947861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.952462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.952780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.952801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.956457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.956586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.956606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.960432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.960491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.960510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.964226] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.964283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.964302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.968417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.968482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.968502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.972275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.972325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.972345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.976117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.976180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.976200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 
11:20:55.979922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.979992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.980011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.983756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.983830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.983849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.987883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.987975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.987995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.991796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.991855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.991878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:55.995739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:55.995819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:55.995838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:56.000354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:56.000459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:56.000478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:56.005688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:56.005748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:56.005768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:56.010986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:56.011070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:56.011090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:56.016007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:56.016150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:56.016169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:56.021243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:56.021330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:56.021350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.538 [2024-11-20 11:20:56.026088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.538 [2024-11-20 11:20:56.026141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.538 [2024-11-20 11:20:56.026163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.030940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.031027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.799 [2024-11-20 11:20:56.031050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.036839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.036912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.799 [2024-11-20 11:20:56.036934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.043339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.043459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.799 [2024-11-20 11:20:56.043479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.049633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.049763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.799 [2024-11-20 11:20:56.049782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.055766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.055857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.799 [2024-11-20 11:20:56.055877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.061897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.062023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.799 [2024-11-20 11:20:56.062042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.067666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.067718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.799 [2024-11-20 11:20:56.067737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.073006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.073061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.799 [2024-11-20 11:20:56.073080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.078329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.078395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.799 [2024-11-20 11:20:56.078414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.083490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.083609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.799 [2024-11-20 11:20:56.083629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.799 [2024-11-20 11:20:56.088835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.799 [2024-11-20 11:20:56.088898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.088918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.093968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.094020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.094040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.099309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.099362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.099381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.104760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.104865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.104884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.111184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.111309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.111328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.117504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.117661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.117680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.123944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 
00:26:28.800 [2024-11-20 11:20:56.124035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.124055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.130397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.130622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.130643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.137002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.137091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.137114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.143604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.144028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.144049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.150698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.150832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.150851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.157352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.157471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.157490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.164504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.164657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.164676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.170920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.171055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.171074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.177919] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.178081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.178101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.184427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.184550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.184570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.190827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.191009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.191029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.197727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.197904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.197924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:28.800 [2024-11-20 11:20:56.203328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.203502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.203521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.208676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.208729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.208749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.214383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.214451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.214471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.219015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.219093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.219113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.222965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.800 [2024-11-20 11:20:56.223081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.800 [2024-11-20 11:20:56.223100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.800 [2024-11-20 11:20:56.226874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.226952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.226972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.230765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.230848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.230868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.234684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.234742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.234761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.238752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.238833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.238853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.243631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.243703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.243722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.248055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.248143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.248162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.252182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.252257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.801 [2024-11-20 11:20:56.252277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.256390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.256461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.256480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.260285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.260371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.260391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.264444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.264537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.264556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.268314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.268398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.268418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.272106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.272201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.272225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.276264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.276332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.276351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.281179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.281264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.281285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.285293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.285360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.285379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.801 [2024-11-20 11:20:56.289241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:28.801 [2024-11-20 11:20:56.289550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.801 [2024-11-20 11:20:56.289574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.061 [2024-11-20 11:20:56.293422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.061 [2024-11-20 11:20:56.293510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.061 [2024-11-20 11:20:56.293533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.061 [2024-11-20 11:20:56.297337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.061 [2024-11-20 11:20:56.297409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.297430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.301186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 
00:26:29.062 [2024-11-20 11:20:56.301285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.301306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.305017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.305113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.305133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.308784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.308844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.308863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.312863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.312956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.312977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.316644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.316698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.316718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.320430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.320493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.320513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.324241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.324292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.324311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.328017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.328085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.328104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.332532] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.332599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.332618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.336808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.336884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.336903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.340771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.340864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.340882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.344695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.344793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.344812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:29.062 [2024-11-20 11:20:56.348639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.348703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.348722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.352555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.352639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.352658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.356449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.356506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.356525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.360415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.360464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.360483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.364267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.364334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.364354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.368429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.368521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.368540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.372412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.372488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.372507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.376367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.376422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.376446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.380265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.380313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.380334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.384214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.384328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.062 [2024-11-20 11:20:56.384348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.062 [2024-11-20 11:20:56.388213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.062 [2024-11-20 11:20:56.388291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.388311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.392280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.392349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:29.063 [2024-11-20 11:20:56.392369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.397070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.397230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.397250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.402368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.402497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.402516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.408281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.408441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.408460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.413959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.414134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.414156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.418658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.418711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.418730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.422752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.422819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.422839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.426935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.427035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.427055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.431346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.431420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.431439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.436148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.436205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.436223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.440520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.440627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.440646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.445652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.445703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.445722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.450263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 
00:26:29.063 [2024-11-20 11:20:56.450357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.450376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.454354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.454412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.454431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.458249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.458298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.458317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.462154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.462222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.462241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.466068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.466131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.466150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.470095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.470145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.470164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.473892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.473968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.473987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.478067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.478164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.478184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.481762] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.481823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.481841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.485436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.063 [2024-11-20 11:20:56.485496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.063 [2024-11-20 11:20:56.485515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.063 [2024-11-20 11:20:56.489116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.489173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.489197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.492811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.492870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.492890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:29.064 [2024-11-20 11:20:56.496688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.496757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.496776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.501475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.501524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.501543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.505998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.506058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.506077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.509859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.509924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.509943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.513798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.513851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.513870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.517729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.517802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.517821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.521636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.521718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.521738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.525577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.525636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.525654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.529504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.529568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.529588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.533457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.533509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.533528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.537361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.537428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.537448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.541401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.541470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:29.064 [2024-11-20 11:20:56.541490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.545377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.545454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.545473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.549350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.549426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.549447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.064 [2024-11-20 11:20:56.553323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.064 [2024-11-20 11:20:56.553396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.064 [2024-11-20 11:20:56.553419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.557325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.557380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.557402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.561552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.561609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.561631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.565546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.565630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.565651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.569538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.569592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.569613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.573592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.573641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.573662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.577635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.577683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.577703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.581733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.581787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.581807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.585876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.585955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.585975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.589790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 
00:26:29.324 [2024-11-20 11:20:56.589859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.589879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.593770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.593858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.593884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.597650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.597723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.597743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.601519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.601584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.601604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.605400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.605458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.605477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.609338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.609420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.609439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.614067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.614116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.614136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.324 6465.00 IOPS, 808.12 MiB/s [2024-11-20T10:20:56.820Z] [2024-11-20 11:20:56.619672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.619736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.619756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:26:29.324 [2024-11-20 11:20:56.624920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.625012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.625032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.631421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.631505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.631525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.324 [2024-11-20 11:20:56.638208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.324 [2024-11-20 11:20:56.638404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.324 [2024-11-20 11:20:56.638423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.644985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.645149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.645168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.651778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.651889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.651910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.658442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.658583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.658602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.665134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.665295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.665313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.672325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.672497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.672517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.679560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.679684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.679703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.686814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.686985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.687006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.694028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.694204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.694223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.700869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.701286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:29.325 [2024-11-20 11:20:56.701307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.708123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.708316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.708336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.715175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.715298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.715317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.722586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.722756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.722775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.729768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.729912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.729933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.736430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.736562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.736582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.743618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.743824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.743843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.751460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.751569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.751588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.758082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.758274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.758297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.764792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.764892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.764911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.771825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.771915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.771934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.778544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.778638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.778658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.785289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 
00:26:29.325 [2024-11-20 11:20:56.785401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.785421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.791492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.791589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.791608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.325 [2024-11-20 11:20:56.797610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.325 [2024-11-20 11:20:56.797672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.325 [2024-11-20 11:20:56.797691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.326 [2024-11-20 11:20:56.803839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.326 [2024-11-20 11:20:56.803939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.326 [2024-11-20 11:20:56.803964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.326 [2024-11-20 11:20:56.810059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.326 [2024-11-20 11:20:56.810233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.326 [2024-11-20 11:20:56.810252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.326 [2024-11-20 11:20:56.816703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.586 [2024-11-20 11:20:56.816906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.586 [2024-11-20 11:20:56.816943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.586 [2024-11-20 11:20:56.823866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.586 [2024-11-20 11:20:56.824275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.586 [2024-11-20 11:20:56.824299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.586 [2024-11-20 11:20:56.831182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.586 [2024-11-20 11:20:56.831268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.586 [2024-11-20 11:20:56.831289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.586 [2024-11-20 11:20:56.837412] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.586 [2024-11-20 11:20:56.837517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.586 [2024-11-20 11:20:56.837537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.586 [2024-11-20 11:20:56.844027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.586 [2024-11-20 11:20:56.844082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.586 [2024-11-20 11:20:56.844101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.586 [2024-11-20 11:20:56.850744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.586 [2024-11-20 11:20:56.850829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.586 [2024-11-20 11:20:56.850849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.586 [2024-11-20 11:20:56.857007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.586 [2024-11-20 11:20:56.857145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.586 [2024-11-20 11:20:56.857165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:29.586 [2024-11-20 11:20:56.863976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.586 [2024-11-20 11:20:56.864047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.586 [2024-11-20 11:20:56.864067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.586 [2024-11-20 11:20:56.870628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.870741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.870762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.876724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.876821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.876841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.882552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.882670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.882690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.888269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.888373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.888393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.894537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.894646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.894666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.899790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.899902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.899921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.904980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.905047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.905066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.909982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.910051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.910071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.915207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.915260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.915280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.919813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.919899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.919922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.924716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.924790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:29.587 [2024-11-20 11:20:56.924808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.930010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.930067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.930086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.935662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.935715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.935735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.941451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.941505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.941524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.947385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.947505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.947524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.953206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.953270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.953289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.958143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.958223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.958243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.962706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.962811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.962830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.967849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.967942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.967969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.972672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.972727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.972747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.977322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.977427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.977447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.982430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.587 [2024-11-20 11:20:56.982506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.982525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.986788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 
00:26:29.587 [2024-11-20 11:20:56.986914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.587 [2024-11-20 11:20:56.986933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.587 [2024-11-20 11:20:56.990995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:56.991074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:56.991093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:56.995106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:56.995164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:56.995184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:56.999433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:56.999502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:56.999521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.004803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.004995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.005014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.010315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.010439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.010458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.015904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.016108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.016127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.022476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.022593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.022612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.028038] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.028104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.028123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.033035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.033134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.033153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.037878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.037997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.038017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.042905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.043005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.043024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:29.588 [2024-11-20 11:20:57.047779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.047879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.047899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.052463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.052523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.052545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.056989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.057051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.057070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.061545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.061640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.061660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.066278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.066371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.066391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.070984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.071050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.071070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.588 [2024-11-20 11:20:57.075729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.588 [2024-11-20 11:20:57.075812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.588 [2024-11-20 11:20:57.075834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.080130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.849 [2024-11-20 11:20:57.080194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.849 [2024-11-20 11:20:57.080217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.084562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.849 [2024-11-20 11:20:57.084641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.849 [2024-11-20 11:20:57.084662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.089976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.849 [2024-11-20 11:20:57.090054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.849 [2024-11-20 11:20:57.090075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.094995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.849 [2024-11-20 11:20:57.095060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.849 [2024-11-20 11:20:57.095079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.099720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.849 [2024-11-20 11:20:57.099798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:29.849 [2024-11-20 11:20:57.099818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.104626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.849 [2024-11-20 11:20:57.104707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.849 [2024-11-20 11:20:57.104727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.109829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.849 [2024-11-20 11:20:57.109944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.849 [2024-11-20 11:20:57.109974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.114856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.849 [2024-11-20 11:20:57.114992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.849 [2024-11-20 11:20:57.115012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.120741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.849 [2024-11-20 11:20:57.120918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.849 [2024-11-20 11:20:57.120937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.126624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.849 [2024-11-20 11:20:57.126710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.849 [2024-11-20 11:20:57.126729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.849 [2024-11-20 11:20:57.131899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.850 [2024-11-20 11:20:57.131982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.850 [2024-11-20 11:20:57.132001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.850 [2024-11-20 11:20:57.138104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.850 [2024-11-20 11:20:57.138188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.850 [2024-11-20 11:20:57.138208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.850 [2024-11-20 11:20:57.143798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.850 [2024-11-20 11:20:57.143962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.850 [2024-11-20 11:20:57.143982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.850 [2024-11-20 11:20:57.149047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.850 [2024-11-20 11:20:57.149171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.850 [2024-11-20 11:20:57.149190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.850 [2024-11-20 11:20:57.154003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.850 [2024-11-20 11:20:57.154068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.850 [2024-11-20 11:20:57.154088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.850 [2024-11-20 11:20:57.158998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 00:26:29.850 [2024-11-20 11:20:57.159074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.850 [2024-11-20 11:20:57.159094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.850 [2024-11-20 11:20:57.163819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8 
00:26:29.850 [2024-11-20 11:20:57.163888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.850 [2024-11-20 11:20:57.163907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:29.850 [2024-11-20 11:20:57.168507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8
00:26:29.850 [2024-11-20 11:20:57.168578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.850 [2024-11-20 11:20:57.168598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2233:data_crc32_calc_done data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8, nvme_qpair.c:243 WRITE command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for roughly 70 further WRITE commands between 11:20:57.172 and 11:20:57.611; only the lba, sqhd, and timestamps vary. Repeats elided. ...]
00:26:30.375 [2024-11-20 11:20:57.617585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x177bb20) with pdu=0x2000166ff3c8
00:26:30.375 [2024-11-20 11:20:57.617639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.375 [2024-11-20 11:20:57.617660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.375 5894.50 IOPS, 736.81 MiB/s
00:26:30.375 Latency(us)
[2024-11-20T10:20:57.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.375 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:30.375 nvme0n1 : 2.00 5892.77 736.60 0.00 0.00 2710.75 1560.04 8035.28 00:26:30.375 [2024-11-20T10:20:57.872Z] =================================================================================================================== 00:26:30.376 [2024-11-20T10:20:57.872Z] Total : 5892.77 736.60 0.00 0.00 2710.75 1560.04 8035.28 00:26:30.376 { 00:26:30.376 "results": [ 00:26:30.376 { 00:26:30.376 "job": "nvme0n1", 00:26:30.376 "core_mask": "0x2", 00:26:30.376 "workload": "randwrite", 00:26:30.376 "status": "finished", 00:26:30.376 "queue_depth": 16, 00:26:30.376 "io_size": 131072, 00:26:30.376 "runtime": 2.00398, 00:26:30.376 "iops": 5892.773380971866, 00:26:30.376 "mibps": 736.5966726214832, 00:26:30.376 "io_failed": 0, 00:26:30.376 "io_timeout": 0, 00:26:30.376 "avg_latency_us": 2710.7495002706114, 00:26:30.376 "min_latency_us": 1560.0417391304347, 00:26:30.376 "max_latency_us": 8035.28347826087 00:26:30.376 } 00:26:30.376 ], 00:26:30.376 "core_count": 1 00:26:30.376 } 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:30.376 | .driver_specific 00:26:30.376 | .nvme_error 00:26:30.376 | .status_code 00:26:30.376 | .command_transient_transport_error' 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 
381 > 0 )) 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 12656 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 12656 ']' 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 12656 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.376 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 12656 00:26:30.637 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:30.637 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:30.637 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 12656' 00:26:30.637 killing process with pid 12656 00:26:30.637 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 12656 00:26:30.637 Received shutdown signal, test time was about 2.000000 seconds 00:26:30.637 00:26:30.637 Latency(us) 00:26:30.637 [2024-11-20T10:20:58.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.637 [2024-11-20T10:20:58.133Z] =================================================================================================================== 00:26:30.637 [2024-11-20T10:20:58.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.637 11:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 12656 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@116 -- # killprocess 10882 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 10882 ']' 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 10882 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 10882 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 10882' 00:26:30.637 killing process with pid 10882 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 10882 00:26:30.637 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 10882 00:26:30.896 00:26:30.896 real 0m14.033s 00:26:30.896 user 0m26.951s 00:26:30.896 sys 0m4.482s 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.896 ************************************ 00:26:30.896 END TEST nvmf_digest_error 00:26:30.896 ************************************ 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:30.896 11:20:58 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:30.896 rmmod nvme_tcp 00:26:30.896 rmmod nvme_fabrics 00:26:30.896 rmmod nvme_keyring 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 10882 ']' 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 10882 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 10882 ']' 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 10882 00:26:30.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (10882) - No such process 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 10882 is not found' 00:26:30.896 Process with pid 10882 is not found 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.896 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.897 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.897 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:30.897 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:30.897 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:30.897 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:30.897 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.897 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:30.897 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.897 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.897 11:20:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.435 00:26:33.435 real 0m36.291s 00:26:33.435 user 0m55.457s 00:26:33.435 sys 0m13.565s 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:33.435 ************************************ 00:26:33.435 END TEST nvmf_digest 00:26:33.435 ************************************ 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:33.435 11:21:00 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.435 ************************************ 00:26:33.435 START TEST nvmf_bdevperf 00:26:33.435 ************************************ 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:33.435 * Looking for test storage... 00:26:33.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.435 11:21:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:33.435 11:21:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:33.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.435 --rc genhtml_branch_coverage=1 00:26:33.435 --rc genhtml_function_coverage=1 00:26:33.435 --rc genhtml_legend=1 00:26:33.435 --rc geninfo_all_blocks=1 00:26:33.435 --rc geninfo_unexecuted_blocks=1 00:26:33.435 00:26:33.435 ' 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:33.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.435 --rc genhtml_branch_coverage=1 00:26:33.435 --rc genhtml_function_coverage=1 00:26:33.435 --rc genhtml_legend=1 00:26:33.435 --rc geninfo_all_blocks=1 00:26:33.435 --rc geninfo_unexecuted_blocks=1 00:26:33.435 00:26:33.435 ' 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:33.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.435 --rc genhtml_branch_coverage=1 00:26:33.435 --rc genhtml_function_coverage=1 00:26:33.435 --rc genhtml_legend=1 00:26:33.435 --rc geninfo_all_blocks=1 00:26:33.435 --rc geninfo_unexecuted_blocks=1 00:26:33.435 00:26:33.435 ' 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:33.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.435 --rc genhtml_branch_coverage=1 00:26:33.435 --rc genhtml_function_coverage=1 00:26:33.435 --rc genhtml_legend=1 00:26:33.435 --rc geninfo_all_blocks=1 00:26:33.435 --rc geninfo_unexecuted_blocks=1 00:26:33.435 00:26:33.435 ' 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.435 11:21:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.435 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:33.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:26:33.436 11:21:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:40.012 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:40.013 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:40.013 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:40.013 Found net devices under 0000:86:00.0: cvl_0_0 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:40.013 Found net devices under 0000:86:00.1: cvl_0_1 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:40.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:26:40.013 00:26:40.013 --- 10.0.0.2 ping statistics --- 00:26:40.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.013 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:26:40.013 00:26:40.013 --- 10.0.0.1 ping statistics --- 00:26:40.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.013 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:40.013 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=16793 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 16793 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 16793 ']' 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.014 [2024-11-20 11:21:06.719644] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:40.014 [2024-11-20 11:21:06.719697] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.014 [2024-11-20 11:21:06.801954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:40.014 [2024-11-20 11:21:06.847020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.014 [2024-11-20 11:21:06.847055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:40.014 [2024-11-20 11:21:06.847068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.014 [2024-11-20 11:21:06.847076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.014 [2024-11-20 11:21:06.847082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.014 [2024-11-20 11:21:06.848485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.014 [2024-11-20 11:21:06.848590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.014 [2024-11-20 11:21:06.848592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.014 [2024-11-20 11:21:06.984207] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.014 11:21:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.014 11:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.014 Malloc0 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:40.014 [2024-11-20 11:21:07.038075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:40.014 { 00:26:40.014 "params": { 00:26:40.014 "name": "Nvme$subsystem", 00:26:40.014 "trtype": "$TEST_TRANSPORT", 00:26:40.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:40.014 "adrfam": "ipv4", 00:26:40.014 "trsvcid": "$NVMF_PORT", 00:26:40.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:40.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:40.014 "hdgst": ${hdgst:-false}, 00:26:40.014 "ddgst": ${ddgst:-false} 00:26:40.014 }, 00:26:40.014 "method": "bdev_nvme_attach_controller" 00:26:40.014 } 00:26:40.014 EOF 00:26:40.014 )") 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:40.014 11:21:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:40.014 "params": { 00:26:40.014 "name": "Nvme1", 00:26:40.014 "trtype": "tcp", 00:26:40.014 "traddr": "10.0.0.2", 00:26:40.014 "adrfam": "ipv4", 00:26:40.014 "trsvcid": "4420", 00:26:40.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:40.014 "hdgst": false, 00:26:40.014 "ddgst": false 00:26:40.014 }, 00:26:40.014 "method": "bdev_nvme_attach_controller" 00:26:40.014 }' 00:26:40.014 [2024-11-20 11:21:07.091037] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:40.014 [2024-11-20 11:21:07.091080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid16828 ] 00:26:40.014 [2024-11-20 11:21:07.166215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.014 [2024-11-20 11:21:07.207919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.014 Running I/O for 1 seconds... 
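The `gen_nvmf_target_json` flow traced above (an empty `config` array, one heredoc JSON fragment per subsystem, then a comma-joined `printf` that bdevperf reads via `--json /dev/fd/62`) can be sketched standalone. This is a simplified, hypothetical reconstruction from the xtrace, not SPDK's actual helper; variable names mirror `nvmf/common.sh` and the values are the ones used in this run:

```shell
#!/usr/bin/env bash
# Values that nvmf/common.sh derived earlier in this log.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# Sketch of gen_nvmf_target_json: emit one bdev_nvme_attach_controller
# JSON fragment per subsystem index, joined with commas.
gen_nvmf_target_json_sketch() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' \
      "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem" "$subsystem")")
  done
  local IFS=,          # join fragments with commas, as the traced IFS=, printf does
  printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json_sketch 1
```

In the real helper the result is additionally passed through `jq .` before being handed to bdevperf, which is why the log shows pretty-printed JSON.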
00:26:41.389 11129.00 IOPS, 43.47 MiB/s 00:26:41.389 Latency(us) 00:26:41.389 [2024-11-20T10:21:08.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.389 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:41.389 Verification LBA range: start 0x0 length 0x4000 00:26:41.389 Nvme1n1 : 1.01 11179.99 43.67 0.00 0.00 11405.32 676.73 11283.59 00:26:41.389 [2024-11-20T10:21:08.885Z] =================================================================================================================== 00:26:41.389 [2024-11-20T10:21:08.885Z] Total : 11179.99 43.67 0.00 0.00 11405.32 676.73 11283.59 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=17288 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:41.389 { 00:26:41.389 "params": { 00:26:41.389 "name": "Nvme$subsystem", 00:26:41.389 "trtype": "$TEST_TRANSPORT", 00:26:41.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:41.389 "adrfam": "ipv4", 00:26:41.389 "trsvcid": "$NVMF_PORT", 00:26:41.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:41.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:41.389 "hdgst": ${hdgst:-false}, 00:26:41.389 "ddgst": 
${ddgst:-false} 00:26:41.389 }, 00:26:41.389 "method": "bdev_nvme_attach_controller" 00:26:41.389 } 00:26:41.389 EOF 00:26:41.389 )") 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:41.389 11:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:41.389 "params": { 00:26:41.389 "name": "Nvme1", 00:26:41.389 "trtype": "tcp", 00:26:41.389 "traddr": "10.0.0.2", 00:26:41.389 "adrfam": "ipv4", 00:26:41.389 "trsvcid": "4420", 00:26:41.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:41.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:41.389 "hdgst": false, 00:26:41.389 "ddgst": false 00:26:41.389 }, 00:26:41.389 "method": "bdev_nvme_attach_controller" 00:26:41.389 }' 00:26:41.389 [2024-11-20 11:21:08.700459] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:26:41.389 [2024-11-20 11:21:08.700511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17288 ] 00:26:41.389 [2024-11-20 11:21:08.775762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.389 [2024-11-20 11:21:08.814590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.646 Running I/O for 15 seconds... 
00:26:43.513 11110.00 IOPS, 43.40 MiB/s [2024-11-20T10:21:11.950Z] 11166.00 IOPS, 43.62 MiB/s [2024-11-20T10:21:11.950Z] 11:21:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 16793 00:26:44.454 11:21:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:44.454 [2024-11-20 11:21:11.668594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-11-20 11:21:11.668632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.454 [2024-11-20 11:21:11.668649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-11-20 11:21:11.668659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.454 [2024-11-20 11:21:11.668668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-11-20 11:21:11.668677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.454 [2024-11-20 11:21:11.668686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-11-20 11:21:11.668693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.454 [2024-11-20 11:21:11.668702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.454 [2024-11-20 11:21:11.668710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.454 [... repeated nvme_qpair entries trimmed: the same "ABORTED - SQ DELETION (00/08)" completion is logged for each remaining in-flight command (WRITE lba 108384-108560, READ lba 107552-107760, all sqid:1 len:8) ...] 00:26:44.455 [2024-11-20 11:21:11.669644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.455 [2024-11-20 11:21:11.669651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.455 [2024-11-20 11:21:11.669660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.455 [2024-11-20 11:21:11.669667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:44.456 [2024-11-20 11:21:11.669745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.456 [2024-11-20 11:21:11.669902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.669986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.669994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:44.456 [2024-11-20 11:21:11.670016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.456 [2024-11-20 11:21:11.670200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.456 [2024-11-20 11:21:11.670206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:44.457 [2024-11-20 11:21:11.670282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:44.457 [2024-11-20 11:21:11.670548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.457 [2024-11-20 11:21:11.670718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.457 [2024-11-20 11:21:11.670726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.458 [2024-11-20 11:21:11.670733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.458 [2024-11-20 11:21:11.670742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.458 [2024-11-20 11:21:11.670748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.458 [2024-11-20 11:21:11.670757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.458 [2024-11-20 11:21:11.670763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.458 [2024-11-20 11:21:11.670771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb8cf0 is same with the state(6) to be set 00:26:44.458 [2024-11-20 11:21:11.670781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.458 [2024-11-20 11:21:11.670786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.458 [2024-11-20 11:21:11.670793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108336 len:8 PRP1 0x0 PRP2 0x0 00:26:44.458 [2024-11-20 11:21:11.670800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.458 [2024-11-20 11:21:11.673694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] 
resetting controller 00:26:44.458 [2024-11-20 11:21:11.673748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.458 [2024-11-20 11:21:11.674310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.458 [2024-11-20 11:21:11.674327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.458 [2024-11-20 11:21:11.674336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.458 [2024-11-20 11:21:11.674515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.458 [2024-11-20 11:21:11.674694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.458 [2024-11-20 11:21:11.674703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.458 [2024-11-20 11:21:11.674712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.458 [2024-11-20 11:21:11.674720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.458 [2024-11-20 11:21:11.687213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.458 [2024-11-20 11:21:11.687644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.458 [2024-11-20 11:21:11.687695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.458 [2024-11-20 11:21:11.687720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.458 [2024-11-20 11:21:11.688214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.458 [2024-11-20 11:21:11.688388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.458 [2024-11-20 11:21:11.688397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.458 [2024-11-20 11:21:11.688404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.458 [2024-11-20 11:21:11.688410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.458 [2024-11-20 11:21:11.700160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.458 [2024-11-20 11:21:11.700443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.458 [2024-11-20 11:21:11.700460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.458 [2024-11-20 11:21:11.700467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.458 [2024-11-20 11:21:11.700650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.458 [2024-11-20 11:21:11.700824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.458 [2024-11-20 11:21:11.700833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.458 [2024-11-20 11:21:11.700840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.458 [2024-11-20 11:21:11.700847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.458 [2024-11-20 11:21:11.713076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.458 [2024-11-20 11:21:11.713375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.458 [2024-11-20 11:21:11.713392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.458 [2024-11-20 11:21:11.713399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.458 [2024-11-20 11:21:11.713562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.458 [2024-11-20 11:21:11.713725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.458 [2024-11-20 11:21:11.713733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.458 [2024-11-20 11:21:11.713740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.458 [2024-11-20 11:21:11.713746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.458 [2024-11-20 11:21:11.726068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.458 [2024-11-20 11:21:11.726340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.458 [2024-11-20 11:21:11.726357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.458 [2024-11-20 11:21:11.726364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.458 [2024-11-20 11:21:11.726536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.458 [2024-11-20 11:21:11.726706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.458 [2024-11-20 11:21:11.726718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.458 [2024-11-20 11:21:11.726725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.458 [2024-11-20 11:21:11.726731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.458 [2024-11-20 11:21:11.738905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.458 [2024-11-20 11:21:11.739192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.458 [2024-11-20 11:21:11.739209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.458 [2024-11-20 11:21:11.739216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.458 [2024-11-20 11:21:11.739388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.458 [2024-11-20 11:21:11.739561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.458 [2024-11-20 11:21:11.739569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.458 [2024-11-20 11:21:11.739576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.458 [2024-11-20 11:21:11.739583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.458 [2024-11-20 11:21:11.751913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.458 [2024-11-20 11:21:11.752313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.458 [2024-11-20 11:21:11.752330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.458 [2024-11-20 11:21:11.752337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.458 [2024-11-20 11:21:11.752499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.458 [2024-11-20 11:21:11.752661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.458 [2024-11-20 11:21:11.752668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.458 [2024-11-20 11:21:11.752675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.458 [2024-11-20 11:21:11.752681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.459 [2024-11-20 11:21:11.764876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.459 [2024-11-20 11:21:11.765181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.459 [2024-11-20 11:21:11.765199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.459 [2024-11-20 11:21:11.765207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.459 [2024-11-20 11:21:11.765379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.459 [2024-11-20 11:21:11.765551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.459 [2024-11-20 11:21:11.765559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.459 [2024-11-20 11:21:11.765566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.459 [2024-11-20 11:21:11.765572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.459 [2024-11-20 11:21:11.777765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.459 [2024-11-20 11:21:11.778086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.459 [2024-11-20 11:21:11.778104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.459 [2024-11-20 11:21:11.778111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.459 [2024-11-20 11:21:11.778290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.459 [2024-11-20 11:21:11.778452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.459 [2024-11-20 11:21:11.778461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.459 [2024-11-20 11:21:11.778467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.459 [2024-11-20 11:21:11.778473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.459 [2024-11-20 11:21:11.790792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.459 [2024-11-20 11:21:11.791229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.459 [2024-11-20 11:21:11.791246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.459 [2024-11-20 11:21:11.791254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.459 [2024-11-20 11:21:11.791807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.459 [2024-11-20 11:21:11.792123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.459 [2024-11-20 11:21:11.792132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.459 [2024-11-20 11:21:11.792139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.459 [2024-11-20 11:21:11.792146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.459 [2024-11-20 11:21:11.803694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.459 [2024-11-20 11:21:11.804118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.459 [2024-11-20 11:21:11.804135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.459 [2024-11-20 11:21:11.804143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.459 [2024-11-20 11:21:11.804305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.459 [2024-11-20 11:21:11.804467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.459 [2024-11-20 11:21:11.804475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.459 [2024-11-20 11:21:11.804481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.459 [2024-11-20 11:21:11.804487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.459 [2024-11-20 11:21:11.816645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.459 [2024-11-20 11:21:11.817094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.459 [2024-11-20 11:21:11.817149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.459 [2024-11-20 11:21:11.817174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.459 [2024-11-20 11:21:11.817753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.459 [2024-11-20 11:21:11.818263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.459 [2024-11-20 11:21:11.818272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.459 [2024-11-20 11:21:11.818278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.459 [2024-11-20 11:21:11.818285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.459 [2024-11-20 11:21:11.829486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.459 [2024-11-20 11:21:11.829888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.459 [2024-11-20 11:21:11.829905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.459 [2024-11-20 11:21:11.829912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.459 [2024-11-20 11:21:11.830101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.459 [2024-11-20 11:21:11.830273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.459 [2024-11-20 11:21:11.830281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.459 [2024-11-20 11:21:11.830288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.459 [2024-11-20 11:21:11.830294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.459 [2024-11-20 11:21:11.842400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.459 [2024-11-20 11:21:11.842802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.459 [2024-11-20 11:21:11.842847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.459 [2024-11-20 11:21:11.842870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.459 [2024-11-20 11:21:11.843379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.459 [2024-11-20 11:21:11.843552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.459 [2024-11-20 11:21:11.843561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.459 [2024-11-20 11:21:11.843568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.459 [2024-11-20 11:21:11.843574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.460 [2024-11-20 11:21:11.857399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.460 [2024-11-20 11:21:11.857833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.460 [2024-11-20 11:21:11.857855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.460 [2024-11-20 11:21:11.857866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.460 [2024-11-20 11:21:11.858128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.460 [2024-11-20 11:21:11.858382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.460 [2024-11-20 11:21:11.858394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.460 [2024-11-20 11:21:11.858404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.460 [2024-11-20 11:21:11.858413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.460 [2024-11-20 11:21:11.870439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.460 [2024-11-20 11:21:11.870864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.460 [2024-11-20 11:21:11.870910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.460 [2024-11-20 11:21:11.870933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.460 [2024-11-20 11:21:11.871426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.460 [2024-11-20 11:21:11.871599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.460 [2024-11-20 11:21:11.871607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.460 [2024-11-20 11:21:11.871614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.460 [2024-11-20 11:21:11.871620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.460 [2024-11-20 11:21:11.883253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.460 [2024-11-20 11:21:11.883676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.460 [2024-11-20 11:21:11.883692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.460 [2024-11-20 11:21:11.883699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.460 [2024-11-20 11:21:11.883861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.460 [2024-11-20 11:21:11.884048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.460 [2024-11-20 11:21:11.884057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.460 [2024-11-20 11:21:11.884064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.460 [2024-11-20 11:21:11.884070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.460 [2024-11-20 11:21:11.896051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.460 [2024-11-20 11:21:11.896473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.460 [2024-11-20 11:21:11.896519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.460 [2024-11-20 11:21:11.896544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.460 [2024-11-20 11:21:11.897134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.460 [2024-11-20 11:21:11.897346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.460 [2024-11-20 11:21:11.897354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.460 [2024-11-20 11:21:11.897364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.460 [2024-11-20 11:21:11.897371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.460 [2024-11-20 11:21:11.908959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.460 [2024-11-20 11:21:11.909388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.460 [2024-11-20 11:21:11.909433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.460 [2024-11-20 11:21:11.909457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.460 [2024-11-20 11:21:11.910054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.460 [2024-11-20 11:21:11.910612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.460 [2024-11-20 11:21:11.910620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.460 [2024-11-20 11:21:11.910627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.460 [2024-11-20 11:21:11.910634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.460 [2024-11-20 11:21:11.921830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.460 [2024-11-20 11:21:11.922272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.460 [2024-11-20 11:21:11.922289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.460 [2024-11-20 11:21:11.922296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.460 [2024-11-20 11:21:11.922459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.460 [2024-11-20 11:21:11.922621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.460 [2024-11-20 11:21:11.922629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.460 [2024-11-20 11:21:11.922635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.460 [2024-11-20 11:21:11.922641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.460 [2024-11-20 11:21:11.934925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.460 [2024-11-20 11:21:11.935818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.460 [2024-11-20 11:21:11.935840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.460 [2024-11-20 11:21:11.935851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.460 [2024-11-20 11:21:11.936043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.460 [2024-11-20 11:21:11.936222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.460 [2024-11-20 11:21:11.936231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.460 [2024-11-20 11:21:11.936238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.460 [2024-11-20 11:21:11.936245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.721 [2024-11-20 11:21:11.948095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.721 [2024-11-20 11:21:11.948503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.721 [2024-11-20 11:21:11.948521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.721 [2024-11-20 11:21:11.948528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.721 [2024-11-20 11:21:11.948706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.721 [2024-11-20 11:21:11.948883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.721 [2024-11-20 11:21:11.948893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.721 [2024-11-20 11:21:11.948901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.721 [2024-11-20 11:21:11.948909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.721 [2024-11-20 11:21:11.961234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.721 [2024-11-20 11:21:11.961581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.721 [2024-11-20 11:21:11.961633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.721 [2024-11-20 11:21:11.961658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.721 [2024-11-20 11:21:11.962249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.721 [2024-11-20 11:21:11.962771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.721 [2024-11-20 11:21:11.962780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.721 [2024-11-20 11:21:11.962787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.721 [2024-11-20 11:21:11.962793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.721 [2024-11-20 11:21:11.974284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.721 [2024-11-20 11:21:11.974719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.721 [2024-11-20 11:21:11.974758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.721 [2024-11-20 11:21:11.974783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.721 [2024-11-20 11:21:11.975349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.721 [2024-11-20 11:21:11.975738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.721 [2024-11-20 11:21:11.975755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.721 [2024-11-20 11:21:11.975770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.721 [2024-11-20 11:21:11.975784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.721 [2024-11-20 11:21:11.989000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.721 [2024-11-20 11:21:11.989533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.721 [2024-11-20 11:21:11.989565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.721 [2024-11-20 11:21:11.989576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.721 [2024-11-20 11:21:11.989829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.721 [2024-11-20 11:21:11.990091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.721 [2024-11-20 11:21:11.990104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.721 [2024-11-20 11:21:11.990113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.721 [2024-11-20 11:21:11.990123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.721 9942.67 IOPS, 38.84 MiB/s [2024-11-20T10:21:12.217Z] [2024-11-20 11:21:12.002094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.721 [2024-11-20 11:21:12.002527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.721 [2024-11-20 11:21:12.002544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.721 [2024-11-20 11:21:12.002552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.722 [2024-11-20 11:21:12.002724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.722 [2024-11-20 11:21:12.002895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.722 [2024-11-20 11:21:12.002904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.722 [2024-11-20 11:21:12.002911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.722 [2024-11-20 11:21:12.002917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.722 [2024-11-20 11:21:12.015072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.722 [2024-11-20 11:21:12.015491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.722 [2024-11-20 11:21:12.015508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.722 [2024-11-20 11:21:12.015516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.722 [2024-11-20 11:21:12.015687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.722 [2024-11-20 11:21:12.015859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.722 [2024-11-20 11:21:12.015867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.722 [2024-11-20 11:21:12.015874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.722 [2024-11-20 11:21:12.015881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.722 [2024-11-20 11:21:12.027959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.722 [2024-11-20 11:21:12.028380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.722 [2024-11-20 11:21:12.028396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.722 [2024-11-20 11:21:12.028403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.722 [2024-11-20 11:21:12.028568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.722 [2024-11-20 11:21:12.028730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.722 [2024-11-20 11:21:12.028738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.722 [2024-11-20 11:21:12.028745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.722 [2024-11-20 11:21:12.028751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.722 [2024-11-20 11:21:12.040837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.722 [2024-11-20 11:21:12.041216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.722 [2024-11-20 11:21:12.041232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.722 [2024-11-20 11:21:12.041239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.722 [2024-11-20 11:21:12.041401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.722 [2024-11-20 11:21:12.041564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.722 [2024-11-20 11:21:12.041572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.722 [2024-11-20 11:21:12.041578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.722 [2024-11-20 11:21:12.041584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.722 [2024-11-20 11:21:12.053764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.722 [2024-11-20 11:21:12.054189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.722 [2024-11-20 11:21:12.054206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.722 [2024-11-20 11:21:12.054213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.722 [2024-11-20 11:21:12.054375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.722 [2024-11-20 11:21:12.054537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.722 [2024-11-20 11:21:12.054545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.722 [2024-11-20 11:21:12.054551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.722 [2024-11-20 11:21:12.054557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.722 [2024-11-20 11:21:12.066645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.722 [2024-11-20 11:21:12.067096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.722 [2024-11-20 11:21:12.067143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.722 [2024-11-20 11:21:12.067166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.722 [2024-11-20 11:21:12.067707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.722 [2024-11-20 11:21:12.067870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.722 [2024-11-20 11:21:12.067878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.722 [2024-11-20 11:21:12.067887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.722 [2024-11-20 11:21:12.067894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.722 [2024-11-20 11:21:12.079535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.722 [2024-11-20 11:21:12.079897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.722 [2024-11-20 11:21:12.079942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.722 [2024-11-20 11:21:12.079981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.722 [2024-11-20 11:21:12.080562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.722 [2024-11-20 11:21:12.080845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.722 [2024-11-20 11:21:12.080853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.722 [2024-11-20 11:21:12.080860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.722 [2024-11-20 11:21:12.080867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.722 [2024-11-20 11:21:12.092386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.722 [2024-11-20 11:21:12.092790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.722 [2024-11-20 11:21:12.092807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.722 [2024-11-20 11:21:12.092815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.722 [2024-11-20 11:21:12.092990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.722 [2024-11-20 11:21:12.093162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.722 [2024-11-20 11:21:12.093171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.722 [2024-11-20 11:21:12.093177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.722 [2024-11-20 11:21:12.093183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.722 [2024-11-20 11:21:12.105295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.722 [2024-11-20 11:21:12.105762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.722 [2024-11-20 11:21:12.105807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.722 [2024-11-20 11:21:12.105830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.722 [2024-11-20 11:21:12.106439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.722 [2024-11-20 11:21:12.107032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.722 [2024-11-20 11:21:12.107050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.723 [2024-11-20 11:21:12.107064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.723 [2024-11-20 11:21:12.107078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.723 [2024-11-20 11:21:12.120320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.723 [2024-11-20 11:21:12.120838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.723 [2024-11-20 11:21:12.120884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.723 [2024-11-20 11:21:12.120908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.723 [2024-11-20 11:21:12.121499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.723 [2024-11-20 11:21:12.122076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.723 [2024-11-20 11:21:12.122088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.723 [2024-11-20 11:21:12.122098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.723 [2024-11-20 11:21:12.122107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.723 [2024-11-20 11:21:12.133320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.723 [2024-11-20 11:21:12.133717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.723 [2024-11-20 11:21:12.133734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.723 [2024-11-20 11:21:12.133741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.723 [2024-11-20 11:21:12.133908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.723 [2024-11-20 11:21:12.134098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.723 [2024-11-20 11:21:12.134107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.723 [2024-11-20 11:21:12.134114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.723 [2024-11-20 11:21:12.134120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.723 [2024-11-20 11:21:12.146232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.723 [2024-11-20 11:21:12.146637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.723 [2024-11-20 11:21:12.146681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.723 [2024-11-20 11:21:12.146705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.723 [2024-11-20 11:21:12.147255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.723 [2024-11-20 11:21:12.147642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.723 [2024-11-20 11:21:12.147660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.723 [2024-11-20 11:21:12.147674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.723 [2024-11-20 11:21:12.147688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.723 [2024-11-20 11:21:12.161003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.723 [2024-11-20 11:21:12.161489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.723 [2024-11-20 11:21:12.161515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.723 [2024-11-20 11:21:12.161525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.723 [2024-11-20 11:21:12.161778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.723 [2024-11-20 11:21:12.162039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.723 [2024-11-20 11:21:12.162051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.723 [2024-11-20 11:21:12.162061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.723 [2024-11-20 11:21:12.162070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.723 [2024-11-20 11:21:12.174023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.723 [2024-11-20 11:21:12.174449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.723 [2024-11-20 11:21:12.174467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.723 [2024-11-20 11:21:12.174474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.723 [2024-11-20 11:21:12.174645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.723 [2024-11-20 11:21:12.174816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.723 [2024-11-20 11:21:12.174825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.723 [2024-11-20 11:21:12.174833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.723 [2024-11-20 11:21:12.174840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.723 [2024-11-20 11:21:12.187106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.723 [2024-11-20 11:21:12.187533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.723 [2024-11-20 11:21:12.187550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.723 [2024-11-20 11:21:12.187557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.723 [2024-11-20 11:21:12.187739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.723 [2024-11-20 11:21:12.187911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.723 [2024-11-20 11:21:12.187919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.723 [2024-11-20 11:21:12.187925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.723 [2024-11-20 11:21:12.187932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.723 [2024-11-20 11:21:12.199907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.723 [2024-11-20 11:21:12.200359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.723 [2024-11-20 11:21:12.200404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.723 [2024-11-20 11:21:12.200427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.723 [2024-11-20 11:21:12.201024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.723 [2024-11-20 11:21:12.201216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.723 [2024-11-20 11:21:12.201224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.723 [2024-11-20 11:21:12.201231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.723 [2024-11-20 11:21:12.201237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.723 [2024-11-20 11:21:12.212994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.723 [2024-11-20 11:21:12.213423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.723 [2024-11-20 11:21:12.213440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.723 [2024-11-20 11:21:12.213448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.723 [2024-11-20 11:21:12.213625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.984 [2024-11-20 11:21:12.213802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.984 [2024-11-20 11:21:12.213811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.984 [2024-11-20 11:21:12.213818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.984 [2024-11-20 11:21:12.213825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.984 [2024-11-20 11:21:12.225974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.984 [2024-11-20 11:21:12.226413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.984 [2024-11-20 11:21:12.226458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.984 [2024-11-20 11:21:12.226482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.984 [2024-11-20 11:21:12.227075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.984 [2024-11-20 11:21:12.227490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.984 [2024-11-20 11:21:12.227498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.984 [2024-11-20 11:21:12.227505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.984 [2024-11-20 11:21:12.227512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.984 [2024-11-20 11:21:12.238898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.984 [2024-11-20 11:21:12.239324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.984 [2024-11-20 11:21:12.239341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.984 [2024-11-20 11:21:12.239348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.984 [2024-11-20 11:21:12.239511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.985 [2024-11-20 11:21:12.239674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.985 [2024-11-20 11:21:12.239682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.985 [2024-11-20 11:21:12.239692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.985 [2024-11-20 11:21:12.239699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.985 [2024-11-20 11:21:12.251758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.985 [2024-11-20 11:21:12.252113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.985 [2024-11-20 11:21:12.252129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.985 [2024-11-20 11:21:12.252136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.985 [2024-11-20 11:21:12.252298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.985 [2024-11-20 11:21:12.252460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.985 [2024-11-20 11:21:12.252468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.985 [2024-11-20 11:21:12.252474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.985 [2024-11-20 11:21:12.252479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.985 [2024-11-20 11:21:12.264643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.985 [2024-11-20 11:21:12.265070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.985 [2024-11-20 11:21:12.265116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.985 [2024-11-20 11:21:12.265140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.985 [2024-11-20 11:21:12.265345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.985 [2024-11-20 11:21:12.265507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.985 [2024-11-20 11:21:12.265515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.985 [2024-11-20 11:21:12.265522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.985 [2024-11-20 11:21:12.265529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.985 [2024-11-20 11:21:12.277568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.985 [2024-11-20 11:21:12.277993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.985 [2024-11-20 11:21:12.278009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.985 [2024-11-20 11:21:12.278017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.985 [2024-11-20 11:21:12.278188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.985 [2024-11-20 11:21:12.278364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.985 [2024-11-20 11:21:12.278372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.985 [2024-11-20 11:21:12.278378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.985 [2024-11-20 11:21:12.278384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.985 [2024-11-20 11:21:12.290511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.985 [2024-11-20 11:21:12.290901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.985 [2024-11-20 11:21:12.290916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.985 [2024-11-20 11:21:12.290923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.985 [2024-11-20 11:21:12.291090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.985 [2024-11-20 11:21:12.291254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.985 [2024-11-20 11:21:12.291262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.985 [2024-11-20 11:21:12.291268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.985 [2024-11-20 11:21:12.291274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.985 [2024-11-20 11:21:12.303445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.985 [2024-11-20 11:21:12.303765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.985 [2024-11-20 11:21:12.303781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.985 [2024-11-20 11:21:12.303788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.985 [2024-11-20 11:21:12.303955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.985 [2024-11-20 11:21:12.304118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.985 [2024-11-20 11:21:12.304127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.985 [2024-11-20 11:21:12.304133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.985 [2024-11-20 11:21:12.304139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.985 [2024-11-20 11:21:12.316380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.985 [2024-11-20 11:21:12.316776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.985 [2024-11-20 11:21:12.316792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.985 [2024-11-20 11:21:12.316799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.985 [2024-11-20 11:21:12.316967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.985 [2024-11-20 11:21:12.317157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.985 [2024-11-20 11:21:12.317165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.985 [2024-11-20 11:21:12.317172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.985 [2024-11-20 11:21:12.317178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.985 [2024-11-20 11:21:12.329322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.985 [2024-11-20 11:21:12.329726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.985 [2024-11-20 11:21:12.329771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.985 [2024-11-20 11:21:12.329803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.985 [2024-11-20 11:21:12.330397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.985 [2024-11-20 11:21:12.330799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.985 [2024-11-20 11:21:12.330807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.985 [2024-11-20 11:21:12.330815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.985 [2024-11-20 11:21:12.330821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.985 [2024-11-20 11:21:12.342230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.985 [2024-11-20 11:21:12.342607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.985 [2024-11-20 11:21:12.342650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.985 [2024-11-20 11:21:12.342673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.985 [2024-11-20 11:21:12.343266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.985 [2024-11-20 11:21:12.343730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.985 [2024-11-20 11:21:12.343738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.985 [2024-11-20 11:21:12.343745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.986 [2024-11-20 11:21:12.343751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.986 [2024-11-20 11:21:12.355112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.986 [2024-11-20 11:21:12.355501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.986 [2024-11-20 11:21:12.355556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.986 [2024-11-20 11:21:12.355580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.986 [2024-11-20 11:21:12.356175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.986 [2024-11-20 11:21:12.356670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.986 [2024-11-20 11:21:12.356678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.986 [2024-11-20 11:21:12.356685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.986 [2024-11-20 11:21:12.356691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.986 [2024-11-20 11:21:12.368037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.986 [2024-11-20 11:21:12.368429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.986 [2024-11-20 11:21:12.368445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.986 [2024-11-20 11:21:12.368452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.986 [2024-11-20 11:21:12.368614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.986 [2024-11-20 11:21:12.368779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.986 [2024-11-20 11:21:12.368787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.986 [2024-11-20 11:21:12.368793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.986 [2024-11-20 11:21:12.368799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.986 [2024-11-20 11:21:12.380922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.986 [2024-11-20 11:21:12.381243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.986 [2024-11-20 11:21:12.381259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.986 [2024-11-20 11:21:12.381266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.986 [2024-11-20 11:21:12.381428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.986 [2024-11-20 11:21:12.381591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.986 [2024-11-20 11:21:12.381599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.986 [2024-11-20 11:21:12.381605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.986 [2024-11-20 11:21:12.381611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.986 [2024-11-20 11:21:12.393814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.986 [2024-11-20 11:21:12.394220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.986 [2024-11-20 11:21:12.394266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.986 [2024-11-20 11:21:12.394289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.986 [2024-11-20 11:21:12.394757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.986 [2024-11-20 11:21:12.394919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.986 [2024-11-20 11:21:12.394928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.986 [2024-11-20 11:21:12.394934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.986 [2024-11-20 11:21:12.394940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.986 [2024-11-20 11:21:12.406629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.986 [2024-11-20 11:21:12.407022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.986 [2024-11-20 11:21:12.407038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.986 [2024-11-20 11:21:12.407045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.986 [2024-11-20 11:21:12.407207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.986 [2024-11-20 11:21:12.407370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.986 [2024-11-20 11:21:12.407378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.986 [2024-11-20 11:21:12.407387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.986 [2024-11-20 11:21:12.407394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.986 [2024-11-20 11:21:12.419464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.986 [2024-11-20 11:21:12.419875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.986 [2024-11-20 11:21:12.419910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.986 [2024-11-20 11:21:12.419936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.986 [2024-11-20 11:21:12.420532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.986 [2024-11-20 11:21:12.420798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.986 [2024-11-20 11:21:12.420807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.986 [2024-11-20 11:21:12.420813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.986 [2024-11-20 11:21:12.420820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.986 [2024-11-20 11:21:12.432286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.986 [2024-11-20 11:21:12.432650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.986 [2024-11-20 11:21:12.432666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.986 [2024-11-20 11:21:12.432673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.986 [2024-11-20 11:21:12.432836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.986 [2024-11-20 11:21:12.433023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.986 [2024-11-20 11:21:12.433032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.986 [2024-11-20 11:21:12.433039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.986 [2024-11-20 11:21:12.433045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.986 [2024-11-20 11:21:12.445521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.986 [2024-11-20 11:21:12.445887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.986 [2024-11-20 11:21:12.445905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.986 [2024-11-20 11:21:12.445912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.986 [2024-11-20 11:21:12.446094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.986 [2024-11-20 11:21:12.446281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.986 [2024-11-20 11:21:12.446290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.986 [2024-11-20 11:21:12.446297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.986 [2024-11-20 11:21:12.446305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.986 [2024-11-20 11:21:12.458527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.986 [2024-11-20 11:21:12.458921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.987 [2024-11-20 11:21:12.458977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.987 [2024-11-20 11:21:12.459002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.987 [2024-11-20 11:21:12.459582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.987 [2024-11-20 11:21:12.459780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.987 [2024-11-20 11:21:12.459788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.987 [2024-11-20 11:21:12.459795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.987 [2024-11-20 11:21:12.459801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.987 [2024-11-20 11:21:12.471411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.987 [2024-11-20 11:21:12.471804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.987 [2024-11-20 11:21:12.471819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:44.987 [2024-11-20 11:21:12.471826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:44.987 [2024-11-20 11:21:12.472010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:44.987 [2024-11-20 11:21:12.472197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.987 [2024-11-20 11:21:12.472206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.987 [2024-11-20 11:21:12.472212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.987 [2024-11-20 11:21:12.472219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.247 [2024-11-20 11:21:12.484487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.247 [2024-11-20 11:21:12.484916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.247 [2024-11-20 11:21:12.484933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.247 [2024-11-20 11:21:12.484940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.247 [2024-11-20 11:21:12.485145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.247 [2024-11-20 11:21:12.485337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.247 [2024-11-20 11:21:12.485345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.247 [2024-11-20 11:21:12.485352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.247 [2024-11-20 11:21:12.485359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.247 [2024-11-20 11:21:12.497403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.247 [2024-11-20 11:21:12.497771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.247 [2024-11-20 11:21:12.497787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.247 [2024-11-20 11:21:12.497797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.247 [2024-11-20 11:21:12.497965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.247 [2024-11-20 11:21:12.498154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.247 [2024-11-20 11:21:12.498162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.247 [2024-11-20 11:21:12.498168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.247 [2024-11-20 11:21:12.498175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.247 [2024-11-20 11:21:12.510243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.247 [2024-11-20 11:21:12.510635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.247 [2024-11-20 11:21:12.510652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.247 [2024-11-20 11:21:12.510658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.247 [2024-11-20 11:21:12.510821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.247 [2024-11-20 11:21:12.511006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.247 [2024-11-20 11:21:12.511015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.247 [2024-11-20 11:21:12.511021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.247 [2024-11-20 11:21:12.511028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.247 [2024-11-20 11:21:12.523152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.247 [2024-11-20 11:21:12.523520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.247 [2024-11-20 11:21:12.523536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.247 [2024-11-20 11:21:12.523543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.247 [2024-11-20 11:21:12.523705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.247 [2024-11-20 11:21:12.523866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.247 [2024-11-20 11:21:12.523874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.247 [2024-11-20 11:21:12.523880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.247 [2024-11-20 11:21:12.523886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.247 [2024-11-20 11:21:12.536068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.247 [2024-11-20 11:21:12.536446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.247 [2024-11-20 11:21:12.536463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.248 [2024-11-20 11:21:12.536470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.248 [2024-11-20 11:21:12.536632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.248 [2024-11-20 11:21:12.536797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.248 [2024-11-20 11:21:12.536805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.248 [2024-11-20 11:21:12.536812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.248 [2024-11-20 11:21:12.536819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.248 [2024-11-20 11:21:12.548962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.248 [2024-11-20 11:21:12.549341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.248 [2024-11-20 11:21:12.549357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.248 [2024-11-20 11:21:12.549364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.248 [2024-11-20 11:21:12.549526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.248 [2024-11-20 11:21:12.549688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.248 [2024-11-20 11:21:12.549695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.248 [2024-11-20 11:21:12.549701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.248 [2024-11-20 11:21:12.549707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.248 [2024-11-20 11:21:12.561844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.248 [2024-11-20 11:21:12.562243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.248 [2024-11-20 11:21:12.562289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.248 [2024-11-20 11:21:12.562312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.248 [2024-11-20 11:21:12.562890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.248 [2024-11-20 11:21:12.563308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.248 [2024-11-20 11:21:12.563317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.248 [2024-11-20 11:21:12.563323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.248 [2024-11-20 11:21:12.563330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.248 [2024-11-20 11:21:12.574774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.248 [2024-11-20 11:21:12.575172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.248 [2024-11-20 11:21:12.575221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.248 [2024-11-20 11:21:12.575245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.248 [2024-11-20 11:21:12.575823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.248 [2024-11-20 11:21:12.576063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.248 [2024-11-20 11:21:12.576073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.248 [2024-11-20 11:21:12.576082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.248 [2024-11-20 11:21:12.576089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.248 [2024-11-20 11:21:12.587600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.248 [2024-11-20 11:21:12.588025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.248 [2024-11-20 11:21:12.588041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.248 [2024-11-20 11:21:12.588048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.248 [2024-11-20 11:21:12.588211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.248 [2024-11-20 11:21:12.588373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.248 [2024-11-20 11:21:12.588381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.248 [2024-11-20 11:21:12.588387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.248 [2024-11-20 11:21:12.588393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.248 [2024-11-20 11:21:12.600480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.248 [2024-11-20 11:21:12.600879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.248 [2024-11-20 11:21:12.600924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.248 [2024-11-20 11:21:12.600962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.248 [2024-11-20 11:21:12.601451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.248 [2024-11-20 11:21:12.601625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.248 [2024-11-20 11:21:12.601634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.248 [2024-11-20 11:21:12.601640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.248 [2024-11-20 11:21:12.601647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.248 [2024-11-20 11:21:12.613301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.248 [2024-11-20 11:21:12.613728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.248 [2024-11-20 11:21:12.613773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.248 [2024-11-20 11:21:12.613796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.248 [2024-11-20 11:21:12.614306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.248 [2024-11-20 11:21:12.614479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.248 [2024-11-20 11:21:12.614487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.248 [2024-11-20 11:21:12.614494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.248 [2024-11-20 11:21:12.614501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.248 [2024-11-20 11:21:12.626156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.248 [2024-11-20 11:21:12.626580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.248 [2024-11-20 11:21:12.626596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.248 [2024-11-20 11:21:12.626603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.248 [2024-11-20 11:21:12.626765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.248 [2024-11-20 11:21:12.626928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.248 [2024-11-20 11:21:12.626936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.248 [2024-11-20 11:21:12.626942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.248 [2024-11-20 11:21:12.626955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.248 [2024-11-20 11:21:12.639038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.248 [2024-11-20 11:21:12.639470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.248 [2024-11-20 11:21:12.639486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.248 [2024-11-20 11:21:12.639493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.248 [2024-11-20 11:21:12.639655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.248 [2024-11-20 11:21:12.639818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.248 [2024-11-20 11:21:12.639826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.248 [2024-11-20 11:21:12.639832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.248 [2024-11-20 11:21:12.639838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.248 [2024-11-20 11:21:12.652016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.248 [2024-11-20 11:21:12.652416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.249 [2024-11-20 11:21:12.652460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.249 [2024-11-20 11:21:12.652482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.249 [2024-11-20 11:21:12.653073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.249 [2024-11-20 11:21:12.653524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.249 [2024-11-20 11:21:12.653532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.249 [2024-11-20 11:21:12.653539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.249 [2024-11-20 11:21:12.653545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.249 [2024-11-20 11:21:12.664790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.249 [2024-11-20 11:21:12.665191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.249 [2024-11-20 11:21:12.665236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.249 [2024-11-20 11:21:12.665267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.249 [2024-11-20 11:21:12.665846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.249 [2024-11-20 11:21:12.666384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.249 [2024-11-20 11:21:12.666393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.249 [2024-11-20 11:21:12.666399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.249 [2024-11-20 11:21:12.666406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.249 [2024-11-20 11:21:12.677707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.249 [2024-11-20 11:21:12.678106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.249 [2024-11-20 11:21:12.678122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.249 [2024-11-20 11:21:12.678129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.249 [2024-11-20 11:21:12.678291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.249 [2024-11-20 11:21:12.678454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.249 [2024-11-20 11:21:12.678463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.249 [2024-11-20 11:21:12.678469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.249 [2024-11-20 11:21:12.678475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.249 [2024-11-20 11:21:12.690726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.249 [2024-11-20 11:21:12.691086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.249 [2024-11-20 11:21:12.691105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.249 [2024-11-20 11:21:12.691113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.249 [2024-11-20 11:21:12.691285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.249 [2024-11-20 11:21:12.691458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.249 [2024-11-20 11:21:12.691467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.249 [2024-11-20 11:21:12.691475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.249 [2024-11-20 11:21:12.691483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.249 [2024-11-20 11:21:12.703806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.249 [2024-11-20 11:21:12.704247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.249 [2024-11-20 11:21:12.704294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.249 [2024-11-20 11:21:12.704319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.249 [2024-11-20 11:21:12.704897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.249 [2024-11-20 11:21:12.705499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.249 [2024-11-20 11:21:12.705526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.249 [2024-11-20 11:21:12.705557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.249 [2024-11-20 11:21:12.705564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.249 [2024-11-20 11:21:12.716648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.249 [2024-11-20 11:21:12.717062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.249 [2024-11-20 11:21:12.717079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.249 [2024-11-20 11:21:12.717086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.249 [2024-11-20 11:21:12.717248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.249 [2024-11-20 11:21:12.717410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.249 [2024-11-20 11:21:12.717418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.249 [2024-11-20 11:21:12.717424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.249 [2024-11-20 11:21:12.717430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.249 [2024-11-20 11:21:12.729506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.249 [2024-11-20 11:21:12.729914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.249 [2024-11-20 11:21:12.729969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.249 [2024-11-20 11:21:12.729996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.249 [2024-11-20 11:21:12.730511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.249 [2024-11-20 11:21:12.730673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.249 [2024-11-20 11:21:12.730682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.249 [2024-11-20 11:21:12.730688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.249 [2024-11-20 11:21:12.730693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.511 [2024-11-20 11:21:12.742567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.511 [2024-11-20 11:21:12.742974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.511 [2024-11-20 11:21:12.742992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.511 [2024-11-20 11:21:12.742999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.511 [2024-11-20 11:21:12.743171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.511 [2024-11-20 11:21:12.743348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.511 [2024-11-20 11:21:12.743356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.511 [2024-11-20 11:21:12.743362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.511 [2024-11-20 11:21:12.743371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.511 [2024-11-20 11:21:12.755372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.511 [2024-11-20 11:21:12.755786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.511 [2024-11-20 11:21:12.755833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.511 [2024-11-20 11:21:12.755857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.511 [2024-11-20 11:21:12.756424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.511 [2024-11-20 11:21:12.756597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.511 [2024-11-20 11:21:12.756605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.511 [2024-11-20 11:21:12.756612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.511 [2024-11-20 11:21:12.756618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.511 [2024-11-20 11:21:12.768319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.511 [2024-11-20 11:21:12.768713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.511 [2024-11-20 11:21:12.768755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.511 [2024-11-20 11:21:12.768782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.511 [2024-11-20 11:21:12.769375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.511 [2024-11-20 11:21:12.769868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.511 [2024-11-20 11:21:12.769876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.511 [2024-11-20 11:21:12.769883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.511 [2024-11-20 11:21:12.769889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.511 [2024-11-20 11:21:12.781143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.511 [2024-11-20 11:21:12.781544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.511 [2024-11-20 11:21:12.781587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.511 [2024-11-20 11:21:12.781611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.511 [2024-11-20 11:21:12.782202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.511 [2024-11-20 11:21:12.782692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.511 [2024-11-20 11:21:12.782700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.511 [2024-11-20 11:21:12.782707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.511 [2024-11-20 11:21:12.782713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.511 [2024-11-20 11:21:12.794110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.511 [2024-11-20 11:21:12.794455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.511 [2024-11-20 11:21:12.794472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.511 [2024-11-20 11:21:12.794479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.511 [2024-11-20 11:21:12.794651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.511 [2024-11-20 11:21:12.794823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.511 [2024-11-20 11:21:12.794831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.511 [2024-11-20 11:21:12.794838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.511 [2024-11-20 11:21:12.794844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.511 [2024-11-20 11:21:12.807101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.511 [2024-11-20 11:21:12.807391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.511 [2024-11-20 11:21:12.807408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.511 [2024-11-20 11:21:12.807415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.511 [2024-11-20 11:21:12.807578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.511 [2024-11-20 11:21:12.807741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.512 [2024-11-20 11:21:12.807750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.512 [2024-11-20 11:21:12.807757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.512 [2024-11-20 11:21:12.807762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.512 [2024-11-20 11:21:12.820096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.512 [2024-11-20 11:21:12.820484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.512 [2024-11-20 11:21:12.820500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.512 [2024-11-20 11:21:12.820507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.512 [2024-11-20 11:21:12.820669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.512 [2024-11-20 11:21:12.820832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.512 [2024-11-20 11:21:12.820840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.512 [2024-11-20 11:21:12.820846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.512 [2024-11-20 11:21:12.820852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.512 [2024-11-20 11:21:12.833127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.512 [2024-11-20 11:21:12.833554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.512 [2024-11-20 11:21:12.833571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.512 [2024-11-20 11:21:12.833578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.512 [2024-11-20 11:21:12.833759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.512 [2024-11-20 11:21:12.833937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.512 [2024-11-20 11:21:12.833946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.512 [2024-11-20 11:21:12.833959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.512 [2024-11-20 11:21:12.833966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.512 [2024-11-20 11:21:12.845925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.512 [2024-11-20 11:21:12.846311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.512 [2024-11-20 11:21:12.846330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.512 [2024-11-20 11:21:12.846337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.512 [2024-11-20 11:21:12.846510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.512 [2024-11-20 11:21:12.846682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.512 [2024-11-20 11:21:12.846691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.512 [2024-11-20 11:21:12.846698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.512 [2024-11-20 11:21:12.846704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.512 [2024-11-20 11:21:12.858878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.512 [2024-11-20 11:21:12.859374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.512 [2024-11-20 11:21:12.859421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.512 [2024-11-20 11:21:12.859445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.512 [2024-11-20 11:21:12.859891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.512 [2024-11-20 11:21:12.860070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.512 [2024-11-20 11:21:12.860079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.512 [2024-11-20 11:21:12.860086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.512 [2024-11-20 11:21:12.860093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.512 [2024-11-20 11:21:12.871750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.512 [2024-11-20 11:21:12.872148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.512 [2024-11-20 11:21:12.872166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.512 [2024-11-20 11:21:12.872173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.512 [2024-11-20 11:21:12.872345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.512 [2024-11-20 11:21:12.872517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.512 [2024-11-20 11:21:12.872529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.512 [2024-11-20 11:21:12.872535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.512 [2024-11-20 11:21:12.872542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.512 [2024-11-20 11:21:12.884660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.512 [2024-11-20 11:21:12.885083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.512 [2024-11-20 11:21:12.885137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.512 [2024-11-20 11:21:12.885160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.512 [2024-11-20 11:21:12.885739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.512 [2024-11-20 11:21:12.885951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.512 [2024-11-20 11:21:12.885961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.512 [2024-11-20 11:21:12.885967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.512 [2024-11-20 11:21:12.885973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.512 [2024-11-20 11:21:12.897538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.512 [2024-11-20 11:21:12.897882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.512 [2024-11-20 11:21:12.897898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.512 [2024-11-20 11:21:12.897905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.512 [2024-11-20 11:21:12.898073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.512 [2024-11-20 11:21:12.898238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.512 [2024-11-20 11:21:12.898246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.512 [2024-11-20 11:21:12.898252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.512 [2024-11-20 11:21:12.898258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.512 [2024-11-20 11:21:12.910603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.512 [2024-11-20 11:21:12.910970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.512 [2024-11-20 11:21:12.910987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.512 [2024-11-20 11:21:12.910994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.512 [2024-11-20 11:21:12.911156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.512 [2024-11-20 11:21:12.911318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.512 [2024-11-20 11:21:12.911326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.512 [2024-11-20 11:21:12.911333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.512 [2024-11-20 11:21:12.911342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.513 [2024-11-20 11:21:12.923586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.513 [2024-11-20 11:21:12.924032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.513 [2024-11-20 11:21:12.924049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.513 [2024-11-20 11:21:12.924056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.513 [2024-11-20 11:21:12.924244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.513 [2024-11-20 11:21:12.924415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.513 [2024-11-20 11:21:12.924423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.513 [2024-11-20 11:21:12.924430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.513 [2024-11-20 11:21:12.924436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.513 [2024-11-20 11:21:12.936432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.513 [2024-11-20 11:21:12.936853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.513 [2024-11-20 11:21:12.936870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.513 [2024-11-20 11:21:12.936877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.513 [2024-11-20 11:21:12.937054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.513 [2024-11-20 11:21:12.937236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.513 [2024-11-20 11:21:12.937244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.513 [2024-11-20 11:21:12.937250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.513 [2024-11-20 11:21:12.937256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.513 [2024-11-20 11:21:12.949370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.513 [2024-11-20 11:21:12.949725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.513 [2024-11-20 11:21:12.949768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.513 [2024-11-20 11:21:12.949792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.513 [2024-11-20 11:21:12.950298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.513 [2024-11-20 11:21:12.950477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.513 [2024-11-20 11:21:12.950486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.513 [2024-11-20 11:21:12.950494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.513 [2024-11-20 11:21:12.950501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.513 [2024-11-20 11:21:12.962561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.513 [2024-11-20 11:21:12.962923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.513 [2024-11-20 11:21:12.962939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:45.513 [2024-11-20 11:21:12.962951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:45.513 [2024-11-20 11:21:12.963129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:45.513 [2024-11-20 11:21:12.963307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.513 [2024-11-20 11:21:12.963315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.513 [2024-11-20 11:21:12.963322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.513 [2024-11-20 11:21:12.963329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.513 [2024-11-20 11:21:12.975643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.513 [2024-11-20 11:21:12.976047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.513 [2024-11-20 11:21:12.976064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.513 [2024-11-20 11:21:12.976072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.513 [2024-11-20 11:21:12.976248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.513 [2024-11-20 11:21:12.976425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.513 [2024-11-20 11:21:12.976434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.513 [2024-11-20 11:21:12.976441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.513 [2024-11-20 11:21:12.976448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.513 [2024-11-20 11:21:12.988777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.513 [2024-11-20 11:21:12.989211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.513 [2024-11-20 11:21:12.989256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.513 [2024-11-20 11:21:12.989279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.513 [2024-11-20 11:21:12.989857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.513 [2024-11-20 11:21:12.990130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.513 [2024-11-20 11:21:12.990140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.513 [2024-11-20 11:21:12.990146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.513 [2024-11-20 11:21:12.990152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.513 7457.00 IOPS, 29.13 MiB/s [2024-11-20T10:21:13.009Z] [2024-11-20 11:21:13.001950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.513 [2024-11-20 11:21:13.002331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.513 [2024-11-20 11:21:13.002376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.513 [2024-11-20 11:21:13.002400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.513 [2024-11-20 11:21:13.003009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.513 [2024-11-20 11:21:13.003187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.513 [2024-11-20 11:21:13.003196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.513 [2024-11-20 11:21:13.003203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.513 [2024-11-20 11:21:13.003210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.774 [2024-11-20 11:21:13.015063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.774 [2024-11-20 11:21:13.015374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.774 [2024-11-20 11:21:13.015391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.774 [2024-11-20 11:21:13.015399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.774 [2024-11-20 11:21:13.015576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.774 [2024-11-20 11:21:13.015754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.774 [2024-11-20 11:21:13.015763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.774 [2024-11-20 11:21:13.015770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.774 [2024-11-20 11:21:13.015777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.775 [2024-11-20 11:21:13.028076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.775 [2024-11-20 11:21:13.028436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-11-20 11:21:13.028452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.775 [2024-11-20 11:21:13.028459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.775 [2024-11-20 11:21:13.028622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.775 [2024-11-20 11:21:13.028785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.775 [2024-11-20 11:21:13.028793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.775 [2024-11-20 11:21:13.028800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.775 [2024-11-20 11:21:13.028806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.775 [2024-11-20 11:21:13.041150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.775 [2024-11-20 11:21:13.041504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-11-20 11:21:13.041521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.775 [2024-11-20 11:21:13.041528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.775 [2024-11-20 11:21:13.041700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.775 [2024-11-20 11:21:13.041872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.775 [2024-11-20 11:21:13.041884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.775 [2024-11-20 11:21:13.041891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.775 [2024-11-20 11:21:13.041897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.775 [2024-11-20 11:21:13.054150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.775 [2024-11-20 11:21:13.054498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-11-20 11:21:13.054538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.775 [2024-11-20 11:21:13.054564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.775 [2024-11-20 11:21:13.055126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.775 [2024-11-20 11:21:13.055300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.775 [2024-11-20 11:21:13.055308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.775 [2024-11-20 11:21:13.055315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.775 [2024-11-20 11:21:13.055321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.775 [2024-11-20 11:21:13.067158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.775 [2024-11-20 11:21:13.067493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-11-20 11:21:13.067511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.775 [2024-11-20 11:21:13.067519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.775 [2024-11-20 11:21:13.067690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.775 [2024-11-20 11:21:13.067861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.775 [2024-11-20 11:21:13.067870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.775 [2024-11-20 11:21:13.067876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.775 [2024-11-20 11:21:13.067883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.775 [2024-11-20 11:21:13.080122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.775 [2024-11-20 11:21:13.080518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-11-20 11:21:13.080535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.775 [2024-11-20 11:21:13.080542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.775 [2024-11-20 11:21:13.080704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.775 [2024-11-20 11:21:13.080866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.775 [2024-11-20 11:21:13.080874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.775 [2024-11-20 11:21:13.080881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.775 [2024-11-20 11:21:13.080891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.775 [2024-11-20 11:21:13.092940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.775 [2024-11-20 11:21:13.093251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-11-20 11:21:13.093268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.775 [2024-11-20 11:21:13.093275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.775 [2024-11-20 11:21:13.093446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.775 [2024-11-20 11:21:13.093617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.775 [2024-11-20 11:21:13.093626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.775 [2024-11-20 11:21:13.093632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.775 [2024-11-20 11:21:13.093639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.775 [2024-11-20 11:21:13.105898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.775 [2024-11-20 11:21:13.106192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-11-20 11:21:13.106221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.775 [2024-11-20 11:21:13.106228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.775 [2024-11-20 11:21:13.106390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.775 [2024-11-20 11:21:13.106553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.775 [2024-11-20 11:21:13.106561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.775 [2024-11-20 11:21:13.106567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.775 [2024-11-20 11:21:13.106574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.775 [2024-11-20 11:21:13.119041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.775 [2024-11-20 11:21:13.119377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-11-20 11:21:13.119394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.775 [2024-11-20 11:21:13.119402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.775 [2024-11-20 11:21:13.119578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.775 [2024-11-20 11:21:13.119756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.776 [2024-11-20 11:21:13.119764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.776 [2024-11-20 11:21:13.119771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.776 [2024-11-20 11:21:13.119778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.776 [2024-11-20 11:21:13.132086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.776 [2024-11-20 11:21:13.132461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-11-20 11:21:13.132477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.776 [2024-11-20 11:21:13.132485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.776 [2024-11-20 11:21:13.132661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.776 [2024-11-20 11:21:13.132839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.776 [2024-11-20 11:21:13.132848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.776 [2024-11-20 11:21:13.132854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.776 [2024-11-20 11:21:13.132861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.776 [2024-11-20 11:21:13.145167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.776 [2024-11-20 11:21:13.145557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-11-20 11:21:13.145574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.776 [2024-11-20 11:21:13.145582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.776 [2024-11-20 11:21:13.145758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.776 [2024-11-20 11:21:13.145936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.776 [2024-11-20 11:21:13.145944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.776 [2024-11-20 11:21:13.145958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.776 [2024-11-20 11:21:13.145965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.776 [2024-11-20 11:21:13.158277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.776 [2024-11-20 11:21:13.158712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-11-20 11:21:13.158729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.776 [2024-11-20 11:21:13.158737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.776 [2024-11-20 11:21:13.158913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.776 [2024-11-20 11:21:13.159096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.776 [2024-11-20 11:21:13.159105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.776 [2024-11-20 11:21:13.159112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.776 [2024-11-20 11:21:13.159118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.776 [2024-11-20 11:21:13.171433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.776 [2024-11-20 11:21:13.171867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-11-20 11:21:13.171884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.776 [2024-11-20 11:21:13.171892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.776 [2024-11-20 11:21:13.172079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.776 [2024-11-20 11:21:13.172257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.776 [2024-11-20 11:21:13.172265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.776 [2024-11-20 11:21:13.172272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.776 [2024-11-20 11:21:13.172279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.776 [2024-11-20 11:21:13.184602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.776 [2024-11-20 11:21:13.185036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-11-20 11:21:13.185054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.776 [2024-11-20 11:21:13.185062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.776 [2024-11-20 11:21:13.185239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.776 [2024-11-20 11:21:13.185417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.776 [2024-11-20 11:21:13.185426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.776 [2024-11-20 11:21:13.185432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.776 [2024-11-20 11:21:13.185439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.776 [2024-11-20 11:21:13.197758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.776 [2024-11-20 11:21:13.198125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-11-20 11:21:13.198143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.776 [2024-11-20 11:21:13.198150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.776 [2024-11-20 11:21:13.198327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.776 [2024-11-20 11:21:13.198506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.776 [2024-11-20 11:21:13.198514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.776 [2024-11-20 11:21:13.198521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.776 [2024-11-20 11:21:13.198528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.776 [2024-11-20 11:21:13.210864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.776 [2024-11-20 11:21:13.211241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-11-20 11:21:13.211259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.776 [2024-11-20 11:21:13.211267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.776 [2024-11-20 11:21:13.211445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.776 [2024-11-20 11:21:13.211623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.776 [2024-11-20 11:21:13.211636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.776 [2024-11-20 11:21:13.211644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.776 [2024-11-20 11:21:13.211652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.777 [2024-11-20 11:21:13.223985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.777 [2024-11-20 11:21:13.224409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-11-20 11:21:13.224453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.777 [2024-11-20 11:21:13.224477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.777 [2024-11-20 11:21:13.224885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.777 [2024-11-20 11:21:13.225069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.777 [2024-11-20 11:21:13.225078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.777 [2024-11-20 11:21:13.225085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.777 [2024-11-20 11:21:13.225092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.777 [2024-11-20 11:21:13.237079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.777 [2024-11-20 11:21:13.237503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.777 [2024-11-20 11:21:13.237519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:45.777 [2024-11-20 11:21:13.237526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:45.777 [2024-11-20 11:21:13.237687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:45.777 [2024-11-20 11:21:13.237849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.777 [2024-11-20 11:21:13.237857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.777 [2024-11-20 11:21:13.237863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.777 [2024-11-20 11:21:13.237869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.777 [... the same reset/reconnect failure cycle for nqn.2016-06.io.spdk:cnode1 (resetting controller -> connect() failed, errno = 111 on 10.0.0.2:4420 -> flush/Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> Resetting controller failed) repeats for 27 further iterations, roughly every 13 ms, from [2024-11-20 11:21:13.249864] through [2024-11-20 11:21:13.586564] ...] 
00:26:46.302 [2024-11-20 11:21:13.598281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.302 [2024-11-20 11:21:13.598685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.302 [2024-11-20 11:21:13.598731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.302 [2024-11-20 11:21:13.598755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.302 [2024-11-20 11:21:13.599348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.302 [2024-11-20 11:21:13.599806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.302 [2024-11-20 11:21:13.599814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.302 [2024-11-20 11:21:13.599821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.302 [2024-11-20 11:21:13.599827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.302 [2024-11-20 11:21:13.611085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.302 [2024-11-20 11:21:13.611436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.302 [2024-11-20 11:21:13.611452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.302 [2024-11-20 11:21:13.611460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.302 [2024-11-20 11:21:13.611621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.302 [2024-11-20 11:21:13.611784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.302 [2024-11-20 11:21:13.611792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.302 [2024-11-20 11:21:13.611798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.302 [2024-11-20 11:21:13.611804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.302 [2024-11-20 11:21:13.623979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.302 [2024-11-20 11:21:13.624371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.302 [2024-11-20 11:21:13.624387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.302 [2024-11-20 11:21:13.624394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.302 [2024-11-20 11:21:13.624556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.302 [2024-11-20 11:21:13.624717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.302 [2024-11-20 11:21:13.624725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.302 [2024-11-20 11:21:13.624731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.302 [2024-11-20 11:21:13.624737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.302 [2024-11-20 11:21:13.636767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.302 [2024-11-20 11:21:13.637196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.302 [2024-11-20 11:21:13.637241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.302 [2024-11-20 11:21:13.637273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.302 [2024-11-20 11:21:13.637699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.302 [2024-11-20 11:21:13.637861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.302 [2024-11-20 11:21:13.637869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.302 [2024-11-20 11:21:13.637875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.302 [2024-11-20 11:21:13.637882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.302 [2024-11-20 11:21:13.649605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.302 [2024-11-20 11:21:13.649996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.302 [2024-11-20 11:21:13.650013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.302 [2024-11-20 11:21:13.650020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.302 [2024-11-20 11:21:13.650182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.302 [2024-11-20 11:21:13.650343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.302 [2024-11-20 11:21:13.650351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.302 [2024-11-20 11:21:13.650357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.302 [2024-11-20 11:21:13.650363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.302 [2024-11-20 11:21:13.662442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.302 [2024-11-20 11:21:13.662876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.302 [2024-11-20 11:21:13.662920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.302 [2024-11-20 11:21:13.662943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.302 [2024-11-20 11:21:13.663540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.302 [2024-11-20 11:21:13.663927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.302 [2024-11-20 11:21:13.663936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.302 [2024-11-20 11:21:13.663942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.302 [2024-11-20 11:21:13.663953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.303 [2024-11-20 11:21:13.675309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.303 [2024-11-20 11:21:13.675618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.303 [2024-11-20 11:21:13.675635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.303 [2024-11-20 11:21:13.675642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.303 [2024-11-20 11:21:13.675804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.303 [2024-11-20 11:21:13.675991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.303 [2024-11-20 11:21:13.676000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.303 [2024-11-20 11:21:13.676006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.303 [2024-11-20 11:21:13.676012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.303 [2024-11-20 11:21:13.688290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.303 [2024-11-20 11:21:13.688670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.303 [2024-11-20 11:21:13.688687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.303 [2024-11-20 11:21:13.688695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.303 [2024-11-20 11:21:13.688868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.303 [2024-11-20 11:21:13.689047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.303 [2024-11-20 11:21:13.689056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.303 [2024-11-20 11:21:13.689063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.303 [2024-11-20 11:21:13.689069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.303 [2024-11-20 11:21:13.701131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.303 [2024-11-20 11:21:13.701553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.303 [2024-11-20 11:21:13.701570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.303 [2024-11-20 11:21:13.701577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.303 [2024-11-20 11:21:13.701739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.303 [2024-11-20 11:21:13.701901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.303 [2024-11-20 11:21:13.701909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.303 [2024-11-20 11:21:13.701916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.303 [2024-11-20 11:21:13.701921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.303 [2024-11-20 11:21:13.714168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.303 [2024-11-20 11:21:13.714600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.303 [2024-11-20 11:21:13.714646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.303 [2024-11-20 11:21:13.714670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.303 [2024-11-20 11:21:13.715116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.303 [2024-11-20 11:21:13.715289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.303 [2024-11-20 11:21:13.715297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.303 [2024-11-20 11:21:13.715308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.303 [2024-11-20 11:21:13.715314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.303 [2024-11-20 11:21:13.727018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.303 [2024-11-20 11:21:13.727437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.303 [2024-11-20 11:21:13.727482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.303 [2024-11-20 11:21:13.727506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.303 [2024-11-20 11:21:13.727979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.303 [2024-11-20 11:21:13.728154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.303 [2024-11-20 11:21:13.728163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.303 [2024-11-20 11:21:13.728169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.303 [2024-11-20 11:21:13.728176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.303 [2024-11-20 11:21:13.740096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.303 [2024-11-20 11:21:13.740535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.303 [2024-11-20 11:21:13.740581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.303 [2024-11-20 11:21:13.740605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.303 [2024-11-20 11:21:13.741055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.303 [2024-11-20 11:21:13.741234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.303 [2024-11-20 11:21:13.741242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.303 [2024-11-20 11:21:13.741249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.303 [2024-11-20 11:21:13.741256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.303 [2024-11-20 11:21:13.752927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.303 [2024-11-20 11:21:13.753349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.303 [2024-11-20 11:21:13.753365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.303 [2024-11-20 11:21:13.753372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.303 [2024-11-20 11:21:13.753534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.303 [2024-11-20 11:21:13.753696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.303 [2024-11-20 11:21:13.753704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.303 [2024-11-20 11:21:13.753711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.303 [2024-11-20 11:21:13.753717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.303 [2024-11-20 11:21:13.765774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.303 [2024-11-20 11:21:13.766198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.303 [2024-11-20 11:21:13.766215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.303 [2024-11-20 11:21:13.766222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.303 [2024-11-20 11:21:13.766384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.303 [2024-11-20 11:21:13.766546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.303 [2024-11-20 11:21:13.766554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.303 [2024-11-20 11:21:13.766560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.303 [2024-11-20 11:21:13.766566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.303 [2024-11-20 11:21:13.778599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.303 [2024-11-20 11:21:13.779018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.303 [2024-11-20 11:21:13.779034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.303 [2024-11-20 11:21:13.779041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.303 [2024-11-20 11:21:13.779204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.304 [2024-11-20 11:21:13.779366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.304 [2024-11-20 11:21:13.779374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.304 [2024-11-20 11:21:13.779380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.304 [2024-11-20 11:21:13.779386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.304 [2024-11-20 11:21:13.791668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.304 [2024-11-20 11:21:13.792107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.304 [2024-11-20 11:21:13.792125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.304 [2024-11-20 11:21:13.792132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.304 [2024-11-20 11:21:13.792309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.304 [2024-11-20 11:21:13.792485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.304 [2024-11-20 11:21:13.792494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.304 [2024-11-20 11:21:13.792501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.304 [2024-11-20 11:21:13.792507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.564 [2024-11-20 11:21:13.804639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.564 [2024-11-20 11:21:13.805074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-20 11:21:13.805091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.565 [2024-11-20 11:21:13.805102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.565 [2024-11-20 11:21:13.805280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.565 [2024-11-20 11:21:13.805443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.565 [2024-11-20 11:21:13.805451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.565 [2024-11-20 11:21:13.805458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.565 [2024-11-20 11:21:13.805464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.565 [2024-11-20 11:21:13.817660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.565 [2024-11-20 11:21:13.818061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-20 11:21:13.818107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.565 [2024-11-20 11:21:13.818130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.565 [2024-11-20 11:21:13.818708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.565 [2024-11-20 11:21:13.819299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.565 [2024-11-20 11:21:13.819326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.565 [2024-11-20 11:21:13.819349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.565 [2024-11-20 11:21:13.819369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.565 [2024-11-20 11:21:13.830499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.565 [2024-11-20 11:21:13.830923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-20 11:21:13.830979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.565 [2024-11-20 11:21:13.831004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.565 [2024-11-20 11:21:13.831583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.565 [2024-11-20 11:21:13.832182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.565 [2024-11-20 11:21:13.832191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.565 [2024-11-20 11:21:13.832197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.565 [2024-11-20 11:21:13.832204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.565 [2024-11-20 11:21:13.843364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.565 [2024-11-20 11:21:13.843809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-20 11:21:13.843826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.565 [2024-11-20 11:21:13.843834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.565 [2024-11-20 11:21:13.844010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.565 [2024-11-20 11:21:13.844186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.565 [2024-11-20 11:21:13.844194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.565 [2024-11-20 11:21:13.844201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.565 [2024-11-20 11:21:13.844207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.565 [2024-11-20 11:21:13.856281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.565 [2024-11-20 11:21:13.856607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.565 [2024-11-20 11:21:13.856622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.565 [2024-11-20 11:21:13.856629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.565 [2024-11-20 11:21:13.856791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.565 [2024-11-20 11:21:13.856959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.565 [2024-11-20 11:21:13.856967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.565 [2024-11-20 11:21:13.856989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.565 [2024-11-20 11:21:13.856996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.565 [2024-11-20 11:21:13.869117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.565 [2024-11-20 11:21:13.869532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.565 [2024-11-20 11:21:13.869548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.565 [2024-11-20 11:21:13.869555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.565 [2024-11-20 11:21:13.869717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.565 [2024-11-20 11:21:13.869879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.565 [2024-11-20 11:21:13.869887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.565 [2024-11-20 11:21:13.869893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.565 [2024-11-20 11:21:13.869899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.565 [2024-11-20 11:21:13.881931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.565 [2024-11-20 11:21:13.882335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.565 [2024-11-20 11:21:13.882379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.565 [2024-11-20 11:21:13.882402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.565 [2024-11-20 11:21:13.882992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.565 [2024-11-20 11:21:13.883460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.565 [2024-11-20 11:21:13.883470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.565 [2024-11-20 11:21:13.883481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.565 [2024-11-20 11:21:13.883488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.565 [2024-11-20 11:21:13.894766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.565 [2024-11-20 11:21:13.895152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.565 [2024-11-20 11:21:13.895169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.565 [2024-11-20 11:21:13.895176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.565 [2024-11-20 11:21:13.895339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.565 [2024-11-20 11:21:13.895501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.565 [2024-11-20 11:21:13.895509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.565 [2024-11-20 11:21:13.895515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.565 [2024-11-20 11:21:13.895521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.565 [2024-11-20 11:21:13.907630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.565 [2024-11-20 11:21:13.908027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.565 [2024-11-20 11:21:13.908044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.565 [2024-11-20 11:21:13.908051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.565 [2024-11-20 11:21:13.908214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.565 [2024-11-20 11:21:13.908376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.566 [2024-11-20 11:21:13.908384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.566 [2024-11-20 11:21:13.908390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.566 [2024-11-20 11:21:13.908396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.566 [2024-11-20 11:21:13.920548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.566 [2024-11-20 11:21:13.920963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.566 [2024-11-20 11:21:13.920980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.566 [2024-11-20 11:21:13.920987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.566 [2024-11-20 11:21:13.921149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.566 [2024-11-20 11:21:13.921310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.566 [2024-11-20 11:21:13.921318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.566 [2024-11-20 11:21:13.921325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.566 [2024-11-20 11:21:13.921330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.566 [2024-11-20 11:21:13.933415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.566 [2024-11-20 11:21:13.933820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.566 [2024-11-20 11:21:13.933865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.566 [2024-11-20 11:21:13.933888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.566 [2024-11-20 11:21:13.934430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.566 [2024-11-20 11:21:13.934602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.566 [2024-11-20 11:21:13.934610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.566 [2024-11-20 11:21:13.934617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.566 [2024-11-20 11:21:13.934624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.566 [2024-11-20 11:21:13.946258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.566 [2024-11-20 11:21:13.946589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.566 [2024-11-20 11:21:13.946605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.566 [2024-11-20 11:21:13.946612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.566 [2024-11-20 11:21:13.946773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.566 [2024-11-20 11:21:13.946935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.566 [2024-11-20 11:21:13.946943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.566 [2024-11-20 11:21:13.946956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.566 [2024-11-20 11:21:13.946962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.566 [2024-11-20 11:21:13.959094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.566 [2024-11-20 11:21:13.959572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.566 [2024-11-20 11:21:13.959617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.566 [2024-11-20 11:21:13.959641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.566 [2024-11-20 11:21:13.960233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.566 [2024-11-20 11:21:13.960728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.566 [2024-11-20 11:21:13.960736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.566 [2024-11-20 11:21:13.960742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.566 [2024-11-20 11:21:13.960749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.566 [2024-11-20 11:21:13.972151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.566 [2024-11-20 11:21:13.972615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.566 [2024-11-20 11:21:13.972660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.566 [2024-11-20 11:21:13.972691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.566 [2024-11-20 11:21:13.973279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.566 [2024-11-20 11:21:13.973750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.566 [2024-11-20 11:21:13.973759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.566 [2024-11-20 11:21:13.973766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.566 [2024-11-20 11:21:13.973772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.566 [2024-11-20 11:21:13.984965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.566 [2024-11-20 11:21:13.985403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.566 [2024-11-20 11:21:13.985420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.566 [2024-11-20 11:21:13.985427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.566 [2024-11-20 11:21:13.985589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.566 [2024-11-20 11:21:13.985751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.566 [2024-11-20 11:21:13.985759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.566 [2024-11-20 11:21:13.985766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.566 [2024-11-20 11:21:13.985773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.566 5965.60 IOPS, 23.30 MiB/s [2024-11-20T10:21:14.062Z] [2024-11-20 11:21:13.998099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.566 [2024-11-20 11:21:13.998533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.566 [2024-11-20 11:21:13.998551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.566 [2024-11-20 11:21:13.998559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.566 [2024-11-20 11:21:13.998736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.566 [2024-11-20 11:21:13.998914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.566 [2024-11-20 11:21:13.998924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.566 [2024-11-20 11:21:13.998932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.566 [2024-11-20 11:21:13.998939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.566 [2024-11-20 11:21:14.011277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.566 [2024-11-20 11:21:14.011654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.566 [2024-11-20 11:21:14.011699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.566 [2024-11-20 11:21:14.011723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.566 [2024-11-20 11:21:14.012318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.566 [2024-11-20 11:21:14.012594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.566 [2024-11-20 11:21:14.012602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.566 [2024-11-20 11:21:14.012609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.566 [2024-11-20 11:21:14.012616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.566 [2024-11-20 11:21:14.024436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.567 [2024-11-20 11:21:14.024874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.567 [2024-11-20 11:21:14.024891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.567 [2024-11-20 11:21:14.024899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.567 [2024-11-20 11:21:14.025081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.567 [2024-11-20 11:21:14.025259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.567 [2024-11-20 11:21:14.025268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.567 [2024-11-20 11:21:14.025275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.567 [2024-11-20 11:21:14.025281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.567 [2024-11-20 11:21:14.037621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.567 [2024-11-20 11:21:14.038065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.567 [2024-11-20 11:21:14.038112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.567 [2024-11-20 11:21:14.038136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.567 [2024-11-20 11:21:14.038562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.567 [2024-11-20 11:21:14.038739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.567 [2024-11-20 11:21:14.038748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.567 [2024-11-20 11:21:14.038755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.567 [2024-11-20 11:21:14.038762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.567 [2024-11-20 11:21:14.050676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.567 [2024-11-20 11:21:14.051115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.567 [2024-11-20 11:21:14.051161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.567 [2024-11-20 11:21:14.051185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.567 [2024-11-20 11:21:14.051721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.567 [2024-11-20 11:21:14.051905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.567 [2024-11-20 11:21:14.051913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.567 [2024-11-20 11:21:14.051923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.567 [2024-11-20 11:21:14.051930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.829 [2024-11-20 11:21:14.063624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.829 [2024-11-20 11:21:14.064079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.829 [2024-11-20 11:21:14.064096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.829 [2024-11-20 11:21:14.064103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.829 [2024-11-20 11:21:14.064279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.829 [2024-11-20 11:21:14.064457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.829 [2024-11-20 11:21:14.064465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.829 [2024-11-20 11:21:14.064472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.829 [2024-11-20 11:21:14.064479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.829 [2024-11-20 11:21:14.076476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.829 [2024-11-20 11:21:14.076896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.829 [2024-11-20 11:21:14.076913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.829 [2024-11-20 11:21:14.076919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.829 [2024-11-20 11:21:14.077109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.829 [2024-11-20 11:21:14.077281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.829 [2024-11-20 11:21:14.077288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.829 [2024-11-20 11:21:14.077295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.829 [2024-11-20 11:21:14.077301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.829 [2024-11-20 11:21:14.089263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.829 [2024-11-20 11:21:14.089705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.829 [2024-11-20 11:21:14.089722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.829 [2024-11-20 11:21:14.089729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.829 [2024-11-20 11:21:14.089900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.829 [2024-11-20 11:21:14.090077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.829 [2024-11-20 11:21:14.090086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.829 [2024-11-20 11:21:14.090092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.829 [2024-11-20 11:21:14.090099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.829 [2024-11-20 11:21:14.102167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.829 [2024-11-20 11:21:14.102569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.829 [2024-11-20 11:21:14.102586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.829 [2024-11-20 11:21:14.102593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.829 [2024-11-20 11:21:14.102765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.829 [2024-11-20 11:21:14.102937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.829 [2024-11-20 11:21:14.102946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.829 [2024-11-20 11:21:14.102958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.829 [2024-11-20 11:21:14.102965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.829 [2024-11-20 11:21:14.115097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.829 [2024-11-20 11:21:14.115497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.829 [2024-11-20 11:21:14.115542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.829 [2024-11-20 11:21:14.115566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.829 [2024-11-20 11:21:14.116085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.829 [2024-11-20 11:21:14.116257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.829 [2024-11-20 11:21:14.116266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.829 [2024-11-20 11:21:14.116272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.829 [2024-11-20 11:21:14.116279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.829 [2024-11-20 11:21:14.128294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.829 [2024-11-20 11:21:14.128747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.829 [2024-11-20 11:21:14.128791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.829 [2024-11-20 11:21:14.128814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.829 [2024-11-20 11:21:14.129407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.829 [2024-11-20 11:21:14.129995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.829 [2024-11-20 11:21:14.130004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.829 [2024-11-20 11:21:14.130011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.829 [2024-11-20 11:21:14.130017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.829 [2024-11-20 11:21:14.141307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.829 [2024-11-20 11:21:14.141749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.829 [2024-11-20 11:21:14.141793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.829 [2024-11-20 11:21:14.141824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.829 [2024-11-20 11:21:14.142413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.829 [2024-11-20 11:21:14.142965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.829 [2024-11-20 11:21:14.142973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.829 [2024-11-20 11:21:14.142980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.829 [2024-11-20 11:21:14.143003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.829 [2024-11-20 11:21:14.154306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.829 [2024-11-20 11:21:14.154722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.829 [2024-11-20 11:21:14.154738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.829 [2024-11-20 11:21:14.154745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.829 [2024-11-20 11:21:14.154907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.829 [2024-11-20 11:21:14.155097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.829 [2024-11-20 11:21:14.155106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.829 [2024-11-20 11:21:14.155112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.829 [2024-11-20 11:21:14.155119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.830 [2024-11-20 11:21:14.167224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.830 [2024-11-20 11:21:14.167652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.830 [2024-11-20 11:21:14.167697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.830 [2024-11-20 11:21:14.167720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.830 [2024-11-20 11:21:14.168123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.830 [2024-11-20 11:21:14.168296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.830 [2024-11-20 11:21:14.168304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.830 [2024-11-20 11:21:14.168310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.830 [2024-11-20 11:21:14.168317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.830 [2024-11-20 11:21:14.180219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.830 [2024-11-20 11:21:14.180646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.830 [2024-11-20 11:21:14.180691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.830 [2024-11-20 11:21:14.180714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.830 [2024-11-20 11:21:14.181235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.830 [2024-11-20 11:21:14.181429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.830 [2024-11-20 11:21:14.181437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.830 [2024-11-20 11:21:14.181444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.830 [2024-11-20 11:21:14.181450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.830 [2024-11-20 11:21:14.193076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.830 [2024-11-20 11:21:14.193422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.830 [2024-11-20 11:21:14.193438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.830 [2024-11-20 11:21:14.193445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.830 [2024-11-20 11:21:14.193607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.830 [2024-11-20 11:21:14.193769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.830 [2024-11-20 11:21:14.193777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.830 [2024-11-20 11:21:14.193783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.830 [2024-11-20 11:21:14.193789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.830 [2024-11-20 11:21:14.206199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:46.830 [2024-11-20 11:21:14.206617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.830 [2024-11-20 11:21:14.206635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420
00:26:46.830 [2024-11-20 11:21:14.206642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set
00:26:46.830 [2024-11-20 11:21:14.206820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor
00:26:46.830 [2024-11-20 11:21:14.207003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:46.830 [2024-11-20 11:21:14.207012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:46.830 [2024-11-20 11:21:14.207019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:46.830 [2024-11-20 11:21:14.207026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:46.830 [2024-11-20 11:21:14.219335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.830 [2024-11-20 11:21:14.219743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.830 [2024-11-20 11:21:14.219759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.830 [2024-11-20 11:21:14.219767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.830 [2024-11-20 11:21:14.219944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.830 [2024-11-20 11:21:14.220127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.830 [2024-11-20 11:21:14.220136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.830 [2024-11-20 11:21:14.220143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.830 [2024-11-20 11:21:14.220153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.830 [2024-11-20 11:21:14.232466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.830 [2024-11-20 11:21:14.232877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.830 [2024-11-20 11:21:14.232895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.830 [2024-11-20 11:21:14.232903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.830 [2024-11-20 11:21:14.233086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.830 [2024-11-20 11:21:14.233264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.830 [2024-11-20 11:21:14.233273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.830 [2024-11-20 11:21:14.233280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.830 [2024-11-20 11:21:14.233286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.830 [2024-11-20 11:21:14.245593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.830 [2024-11-20 11:21:14.245963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.830 [2024-11-20 11:21:14.245981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.830 [2024-11-20 11:21:14.245989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.830 [2024-11-20 11:21:14.246167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.830 [2024-11-20 11:21:14.246344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.830 [2024-11-20 11:21:14.246354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.830 [2024-11-20 11:21:14.246361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.830 [2024-11-20 11:21:14.246368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.830 [2024-11-20 11:21:14.258701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.830 [2024-11-20 11:21:14.259134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.830 [2024-11-20 11:21:14.259151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.830 [2024-11-20 11:21:14.259159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.830 [2024-11-20 11:21:14.259336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.830 [2024-11-20 11:21:14.259514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.830 [2024-11-20 11:21:14.259523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.830 [2024-11-20 11:21:14.259531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.830 [2024-11-20 11:21:14.259537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.830 [2024-11-20 11:21:14.271866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.830 [2024-11-20 11:21:14.272309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.830 [2024-11-20 11:21:14.272325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.830 [2024-11-20 11:21:14.272333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.830 [2024-11-20 11:21:14.272510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.831 [2024-11-20 11:21:14.272687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.831 [2024-11-20 11:21:14.272696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.831 [2024-11-20 11:21:14.272703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.831 [2024-11-20 11:21:14.272709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.831 [2024-11-20 11:21:14.285048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.831 [2024-11-20 11:21:14.285351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.831 [2024-11-20 11:21:14.285368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.831 [2024-11-20 11:21:14.285376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.831 [2024-11-20 11:21:14.285552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.831 [2024-11-20 11:21:14.285729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.831 [2024-11-20 11:21:14.285738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.831 [2024-11-20 11:21:14.285745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.831 [2024-11-20 11:21:14.285752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.831 [2024-11-20 11:21:14.298075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.831 [2024-11-20 11:21:14.298446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.831 [2024-11-20 11:21:14.298463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.831 [2024-11-20 11:21:14.298471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.831 [2024-11-20 11:21:14.298642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.831 [2024-11-20 11:21:14.298813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.831 [2024-11-20 11:21:14.298822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.831 [2024-11-20 11:21:14.298828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.831 [2024-11-20 11:21:14.298835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.831 [2024-11-20 11:21:14.310998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.831 [2024-11-20 11:21:14.311404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.831 [2024-11-20 11:21:14.311449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:46.831 [2024-11-20 11:21:14.311472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:46.831 [2024-11-20 11:21:14.312070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:46.831 [2024-11-20 11:21:14.312319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.831 [2024-11-20 11:21:14.312327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.831 [2024-11-20 11:21:14.312333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.831 [2024-11-20 11:21:14.312340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.092 [2024-11-20 11:21:14.324164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.092 [2024-11-20 11:21:14.324546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.092 [2024-11-20 11:21:14.324562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.092 [2024-11-20 11:21:14.324569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.092 [2024-11-20 11:21:14.324732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.092 [2024-11-20 11:21:14.324895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.092 [2024-11-20 11:21:14.324903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.092 [2024-11-20 11:21:14.324910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.092 [2024-11-20 11:21:14.324916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.092 [2024-11-20 11:21:14.337150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.092 [2024-11-20 11:21:14.337566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.092 [2024-11-20 11:21:14.337582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.092 [2024-11-20 11:21:14.337589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.092 [2024-11-20 11:21:14.337751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.092 [2024-11-20 11:21:14.337914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.092 [2024-11-20 11:21:14.337922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.092 [2024-11-20 11:21:14.337928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.092 [2024-11-20 11:21:14.337934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.092 [2024-11-20 11:21:14.350026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.092 [2024-11-20 11:21:14.350364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.092 [2024-11-20 11:21:14.350380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.092 [2024-11-20 11:21:14.350387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.092 [2024-11-20 11:21:14.350559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.092 [2024-11-20 11:21:14.350732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.092 [2024-11-20 11:21:14.350743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.092 [2024-11-20 11:21:14.350749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.092 [2024-11-20 11:21:14.350756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.092 [2024-11-20 11:21:14.363029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.092 [2024-11-20 11:21:14.363373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.092 [2024-11-20 11:21:14.363389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.092 [2024-11-20 11:21:14.363397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.092 [2024-11-20 11:21:14.363559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.092 [2024-11-20 11:21:14.363720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.092 [2024-11-20 11:21:14.363728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.092 [2024-11-20 11:21:14.363735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.092 [2024-11-20 11:21:14.363741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.092 [2024-11-20 11:21:14.376066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.092 [2024-11-20 11:21:14.376403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.092 [2024-11-20 11:21:14.376419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.092 [2024-11-20 11:21:14.376426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.092 [2024-11-20 11:21:14.376588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.092 [2024-11-20 11:21:14.376750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.092 [2024-11-20 11:21:14.376758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.092 [2024-11-20 11:21:14.376765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.092 [2024-11-20 11:21:14.376771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.092 [2024-11-20 11:21:14.389085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.092 [2024-11-20 11:21:14.389538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.092 [2024-11-20 11:21:14.389555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.092 [2024-11-20 11:21:14.389563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.092 [2024-11-20 11:21:14.389734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.092 [2024-11-20 11:21:14.389907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.092 [2024-11-20 11:21:14.389915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.092 [2024-11-20 11:21:14.389922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.092 [2024-11-20 11:21:14.389932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.092 [2024-11-20 11:21:14.402076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.092 [2024-11-20 11:21:14.402426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.092 [2024-11-20 11:21:14.402442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.092 [2024-11-20 11:21:14.402449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.092 [2024-11-20 11:21:14.402611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.092 [2024-11-20 11:21:14.402793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.093 [2024-11-20 11:21:14.402804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.093 [2024-11-20 11:21:14.402810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.093 [2024-11-20 11:21:14.402817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.093 [2024-11-20 11:21:14.415061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.093 [2024-11-20 11:21:14.415342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.093 [2024-11-20 11:21:14.415360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.093 [2024-11-20 11:21:14.415367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.093 [2024-11-20 11:21:14.415539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.093 [2024-11-20 11:21:14.415710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.093 [2024-11-20 11:21:14.415719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.093 [2024-11-20 11:21:14.415725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.093 [2024-11-20 11:21:14.415731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.093 [2024-11-20 11:21:14.427927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.093 [2024-11-20 11:21:14.428222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.093 [2024-11-20 11:21:14.428238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.093 [2024-11-20 11:21:14.428245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.093 [2024-11-20 11:21:14.428407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.093 [2024-11-20 11:21:14.428571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.093 [2024-11-20 11:21:14.428579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.093 [2024-11-20 11:21:14.428585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.093 [2024-11-20 11:21:14.428591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.093 [2024-11-20 11:21:14.440822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.093 [2024-11-20 11:21:14.441187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.093 [2024-11-20 11:21:14.441203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.093 [2024-11-20 11:21:14.441211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.093 [2024-11-20 11:21:14.441383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.093 [2024-11-20 11:21:14.441555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.093 [2024-11-20 11:21:14.441564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.093 [2024-11-20 11:21:14.441571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.093 [2024-11-20 11:21:14.441577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.093 [2024-11-20 11:21:14.453904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.093 [2024-11-20 11:21:14.454252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.093 [2024-11-20 11:21:14.454269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.093 [2024-11-20 11:21:14.454277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.093 [2024-11-20 11:21:14.454448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.093 [2024-11-20 11:21:14.454620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.093 [2024-11-20 11:21:14.454630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.093 [2024-11-20 11:21:14.454636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.093 [2024-11-20 11:21:14.454643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.093 [2024-11-20 11:21:14.466857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.093 [2024-11-20 11:21:14.467219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.093 [2024-11-20 11:21:14.467236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.093 [2024-11-20 11:21:14.467243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.093 [2024-11-20 11:21:14.467405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.093 [2024-11-20 11:21:14.467569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.093 [2024-11-20 11:21:14.467577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.093 [2024-11-20 11:21:14.467583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.093 [2024-11-20 11:21:14.467589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.093 [2024-11-20 11:21:14.479742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.093 [2024-11-20 11:21:14.480182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.093 [2024-11-20 11:21:14.480199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.093 [2024-11-20 11:21:14.480206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.093 [2024-11-20 11:21:14.480370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.093 [2024-11-20 11:21:14.480532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.093 [2024-11-20 11:21:14.480540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.093 [2024-11-20 11:21:14.480546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.093 [2024-11-20 11:21:14.480552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.093 [2024-11-20 11:21:14.492791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.093 [2024-11-20 11:21:14.493131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.093 [2024-11-20 11:21:14.493147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.093 [2024-11-20 11:21:14.493155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.093 [2024-11-20 11:21:14.493326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.093 [2024-11-20 11:21:14.493498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.093 [2024-11-20 11:21:14.493507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.093 [2024-11-20 11:21:14.493513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.093 [2024-11-20 11:21:14.493520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.093 [2024-11-20 11:21:14.505794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.093 [2024-11-20 11:21:14.506151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.093 [2024-11-20 11:21:14.506169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.093 [2024-11-20 11:21:14.506176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.093 [2024-11-20 11:21:14.506347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.093 [2024-11-20 11:21:14.506520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.093 [2024-11-20 11:21:14.506529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.093 [2024-11-20 11:21:14.506536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.093 [2024-11-20 11:21:14.506542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.093 [2024-11-20 11:21:14.519042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.093 [2024-11-20 11:21:14.519380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.094 [2024-11-20 11:21:14.519397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.094 [2024-11-20 11:21:14.519405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.094 [2024-11-20 11:21:14.519582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.094 [2024-11-20 11:21:14.519760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.094 [2024-11-20 11:21:14.519774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.094 [2024-11-20 11:21:14.519781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.094 [2024-11-20 11:21:14.519787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.094 [2024-11-20 11:21:14.531957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.094 [2024-11-20 11:21:14.532387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.094 [2024-11-20 11:21:14.532432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.094 [2024-11-20 11:21:14.532455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.094 [2024-11-20 11:21:14.532968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.094 [2024-11-20 11:21:14.533133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.094 [2024-11-20 11:21:14.533141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.094 [2024-11-20 11:21:14.533148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.094 [2024-11-20 11:21:14.533154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.094 [2024-11-20 11:21:14.544808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.094 [2024-11-20 11:21:14.545171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.094 [2024-11-20 11:21:14.545189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.094 [2024-11-20 11:21:14.545196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.094 [2024-11-20 11:21:14.545371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.094 [2024-11-20 11:21:14.545533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.094 [2024-11-20 11:21:14.545542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.094 [2024-11-20 11:21:14.545548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.094 [2024-11-20 11:21:14.545554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.094 [2024-11-20 11:21:14.557951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.094 [2024-11-20 11:21:14.558384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.094 [2024-11-20 11:21:14.558401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.094 [2024-11-20 11:21:14.558408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.094 [2024-11-20 11:21:14.558586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.094 [2024-11-20 11:21:14.558763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.094 [2024-11-20 11:21:14.558772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.094 [2024-11-20 11:21:14.558779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.094 [2024-11-20 11:21:14.558795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.094 [2024-11-20 11:21:14.571113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.094 [2024-11-20 11:21:14.571537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.094 [2024-11-20 11:21:14.571554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.094 [2024-11-20 11:21:14.571562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.094 [2024-11-20 11:21:14.571739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.094 [2024-11-20 11:21:14.571916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.094 [2024-11-20 11:21:14.571925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.094 [2024-11-20 11:21:14.571932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.094 [2024-11-20 11:21:14.571938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.094 [2024-11-20 11:21:14.584281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.094 [2024-11-20 11:21:14.584627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.094 [2024-11-20 11:21:14.584644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.094 [2024-11-20 11:21:14.584651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.355 [2024-11-20 11:21:14.584828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.355 [2024-11-20 11:21:14.585012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.355 [2024-11-20 11:21:14.585022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.355 [2024-11-20 11:21:14.585028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.355 [2024-11-20 11:21:14.585035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.355 [2024-11-20 11:21:14.597323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.355 [2024-11-20 11:21:14.597753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.355 [2024-11-20 11:21:14.597801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.355 [2024-11-20 11:21:14.597825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.355 [2024-11-20 11:21:14.598418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.355 [2024-11-20 11:21:14.599013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.355 [2024-11-20 11:21:14.599036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.355 [2024-11-20 11:21:14.599043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.355 [2024-11-20 11:21:14.599050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.355 [2024-11-20 11:21:14.610330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.355 [2024-11-20 11:21:14.610763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.355 [2024-11-20 11:21:14.610808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.355 [2024-11-20 11:21:14.610834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.355 [2024-11-20 11:21:14.611428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.355 [2024-11-20 11:21:14.612019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.355 [2024-11-20 11:21:14.612046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.355 [2024-11-20 11:21:14.612068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.355 [2024-11-20 11:21:14.612088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.355 [2024-11-20 11:21:14.625158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.355 [2024-11-20 11:21:14.625598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.355 [2024-11-20 11:21:14.625619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.355 [2024-11-20 11:21:14.625629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.355 [2024-11-20 11:21:14.625871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.356 [2024-11-20 11:21:14.626121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.356 [2024-11-20 11:21:14.626133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.356 [2024-11-20 11:21:14.626143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.356 [2024-11-20 11:21:14.626151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.356 [2024-11-20 11:21:14.638086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.356 [2024-11-20 11:21:14.638407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.356 [2024-11-20 11:21:14.638423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.356 [2024-11-20 11:21:14.638430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.356 [2024-11-20 11:21:14.638592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.356 [2024-11-20 11:21:14.638754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.356 [2024-11-20 11:21:14.638762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.356 [2024-11-20 11:21:14.638769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.356 [2024-11-20 11:21:14.638775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.356 [2024-11-20 11:21:14.650907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.356 [2024-11-20 11:21:14.651349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.356 [2024-11-20 11:21:14.651365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.356 [2024-11-20 11:21:14.651373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.356 [2024-11-20 11:21:14.651548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.356 [2024-11-20 11:21:14.651720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.356 [2024-11-20 11:21:14.651728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.356 [2024-11-20 11:21:14.651734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.356 [2024-11-20 11:21:14.651741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 16793 Killed "${NVMF_APP[@]}" "$@" 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:47.356 [2024-11-20 11:21:14.663821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:47.356 [2024-11-20 11:21:14.664265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.356 [2024-11-20 11:21:14.664283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.356 [2024-11-20 11:21:14.664290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.356 [2024-11-20 11:21:14.664462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:47.356 [2024-11-20 11:21:14.664633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.356 [2024-11-20 11:21:14.664642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.356 [2024-11-20 11:21:14.664649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.356 [2024-11-20 11:21:14.664655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=18589 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 18589 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 18589 ']' 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.356 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.356 [2024-11-20 11:21:14.676972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.356 [2024-11-20 11:21:14.677408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.356 [2024-11-20 11:21:14.677425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.356 [2024-11-20 11:21:14.677436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.356 [2024-11-20 11:21:14.677613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.356 [2024-11-20 11:21:14.677791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.356 [2024-11-20 11:21:14.677800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.356 [2024-11-20 11:21:14.677807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.356 [2024-11-20 11:21:14.677814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.356 [2024-11-20 11:21:14.690153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.356 [2024-11-20 11:21:14.690578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.356 [2024-11-20 11:21:14.690596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.356 [2024-11-20 11:21:14.690605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.356 [2024-11-20 11:21:14.690783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.356 [2024-11-20 11:21:14.690967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.356 [2024-11-20 11:21:14.690976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.356 [2024-11-20 11:21:14.690984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.356 [2024-11-20 11:21:14.690992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.356 [2024-11-20 11:21:14.703178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.356 [2024-11-20 11:21:14.703528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.356 [2024-11-20 11:21:14.703545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.356 [2024-11-20 11:21:14.703552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.356 [2024-11-20 11:21:14.703724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.356 [2024-11-20 11:21:14.703896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.356 [2024-11-20 11:21:14.703905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.356 [2024-11-20 11:21:14.703912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.356 [2024-11-20 11:21:14.703919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.356 [2024-11-20 11:21:14.716247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.356 [2024-11-20 11:21:14.716674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.356 [2024-11-20 11:21:14.716691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.356 [2024-11-20 11:21:14.716699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.356 [2024-11-20 11:21:14.716871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.357 [2024-11-20 11:21:14.717053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.357 [2024-11-20 11:21:14.717062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.357 [2024-11-20 11:21:14.717068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.357 [2024-11-20 11:21:14.717074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:47.357 [2024-11-20 11:21:14.719258] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:26:47.357 [2024-11-20 11:21:14.719296] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.357 [2024-11-20 11:21:14.729259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.357 [2024-11-20 11:21:14.729664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.357 [2024-11-20 11:21:14.729681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.357 [2024-11-20 11:21:14.729689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.357 [2024-11-20 11:21:14.729861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.357 [2024-11-20 11:21:14.730041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.357 [2024-11-20 11:21:14.730049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.357 [2024-11-20 11:21:14.730056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.357 [2024-11-20 11:21:14.730063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.357 [2024-11-20 11:21:14.742245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.357 [2024-11-20 11:21:14.742656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.357 [2024-11-20 11:21:14.742674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.357 [2024-11-20 11:21:14.742681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.357 [2024-11-20 11:21:14.742854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.357 [2024-11-20 11:21:14.743031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.357 [2024-11-20 11:21:14.743040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.357 [2024-11-20 11:21:14.743047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.357 [2024-11-20 11:21:14.743054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.357 [2024-11-20 11:21:14.755342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.357 [2024-11-20 11:21:14.755707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.357 [2024-11-20 11:21:14.755724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.357 [2024-11-20 11:21:14.755732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.357 [2024-11-20 11:21:14.755903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.357 [2024-11-20 11:21:14.756084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.357 [2024-11-20 11:21:14.756092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.357 [2024-11-20 11:21:14.756099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.357 [2024-11-20 11:21:14.756105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.357 [2024-11-20 11:21:14.768520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.357 [2024-11-20 11:21:14.768782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.357 [2024-11-20 11:21:14.768807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.357 [2024-11-20 11:21:14.768815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.357 [2024-11-20 11:21:14.769009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.357 [2024-11-20 11:21:14.769191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.357 [2024-11-20 11:21:14.769199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.357 [2024-11-20 11:21:14.769206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.357 [2024-11-20 11:21:14.769213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.357 [2024-11-20 11:21:14.781487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.357 [2024-11-20 11:21:14.781883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.357 [2024-11-20 11:21:14.781899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.357 [2024-11-20 11:21:14.781907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.357 [2024-11-20 11:21:14.782085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.357 [2024-11-20 11:21:14.782258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.357 [2024-11-20 11:21:14.782266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.357 [2024-11-20 11:21:14.782273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.357 [2024-11-20 11:21:14.782279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.357 [2024-11-20 11:21:14.794486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.357 [2024-11-20 11:21:14.794918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.357 [2024-11-20 11:21:14.794935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.357 [2024-11-20 11:21:14.794943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.357 [2024-11-20 11:21:14.795122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.357 [2024-11-20 11:21:14.795296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.357 [2024-11-20 11:21:14.795305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.357 [2024-11-20 11:21:14.795316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.357 [2024-11-20 11:21:14.795322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.357 [2024-11-20 11:21:14.799012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:47.357 [2024-11-20 11:21:14.807525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.357 [2024-11-20 11:21:14.807891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.357 [2024-11-20 11:21:14.807909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.357 [2024-11-20 11:21:14.807925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.357 [2024-11-20 11:21:14.808106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.357 [2024-11-20 11:21:14.808282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.357 [2024-11-20 11:21:14.808291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.357 [2024-11-20 11:21:14.808298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.357 [2024-11-20 11:21:14.808305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.357 [2024-11-20 11:21:14.820487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.357 [2024-11-20 11:21:14.820926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.357 [2024-11-20 11:21:14.820944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.357 [2024-11-20 11:21:14.820960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.357 [2024-11-20 11:21:14.821133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.357 [2024-11-20 11:21:14.821305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.357 [2024-11-20 11:21:14.821313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.358 [2024-11-20 11:21:14.821320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.358 [2024-11-20 11:21:14.821326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.358 [2024-11-20 11:21:14.833510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.358 [2024-11-20 11:21:14.833845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.358 [2024-11-20 11:21:14.833863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.358 [2024-11-20 11:21:14.833873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.358 [2024-11-20 11:21:14.834053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.358 [2024-11-20 11:21:14.834227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.358 [2024-11-20 11:21:14.834237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.358 [2024-11-20 11:21:14.834245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.358 [2024-11-20 11:21:14.834251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:47.358 [2024-11-20 11:21:14.840372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.358 [2024-11-20 11:21:14.840397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.358 [2024-11-20 11:21:14.840407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.358 [2024-11-20 11:21:14.840415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:47.358 [2024-11-20 11:21:14.840422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:47.358 [2024-11-20 11:21:14.841810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.358 [2024-11-20 11:21:14.841918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.358 [2024-11-20 11:21:14.841920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:47.358 [2024-11-20 11:21:14.846676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.358 [2024-11-20 11:21:14.847107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.358 [2024-11-20 11:21:14.847128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.358 [2024-11-20 11:21:14.847137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.358 [2024-11-20 11:21:14.847316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.358 [2024-11-20 11:21:14.847496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.358 [2024-11-20 11:21:14.847505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.358 [2024-11-20 11:21:14.847512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.358 [2024-11-20 11:21:14.847520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.618 [2024-11-20 11:21:14.859838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.618 [2024-11-20 11:21:14.860192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.618 [2024-11-20 11:21:14.860212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.618 [2024-11-20 11:21:14.860221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.618 [2024-11-20 11:21:14.860399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.618 [2024-11-20 11:21:14.860578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.618 [2024-11-20 11:21:14.860586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.618 [2024-11-20 11:21:14.860594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.618 [2024-11-20 11:21:14.860601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.618 [2024-11-20 11:21:14.872924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.618 [2024-11-20 11:21:14.873351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.618 [2024-11-20 11:21:14.873371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.618 [2024-11-20 11:21:14.873379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.618 [2024-11-20 11:21:14.873557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.618 [2024-11-20 11:21:14.873743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.618 [2024-11-20 11:21:14.873752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.618 [2024-11-20 11:21:14.873759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.618 [2024-11-20 11:21:14.873767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.618 [2024-11-20 11:21:14.886080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.618 [2024-11-20 11:21:14.886485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.618 [2024-11-20 11:21:14.886505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.618 [2024-11-20 11:21:14.886514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.618 [2024-11-20 11:21:14.886703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.618 [2024-11-20 11:21:14.886882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.618 [2024-11-20 11:21:14.886891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.618 [2024-11-20 11:21:14.886898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.618 [2024-11-20 11:21:14.886906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.618 [2024-11-20 11:21:14.899218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.618 [2024-11-20 11:21:14.899651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.618 [2024-11-20 11:21:14.899672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.618 [2024-11-20 11:21:14.899681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.619 [2024-11-20 11:21:14.899860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.619 [2024-11-20 11:21:14.900045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.619 [2024-11-20 11:21:14.900055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.619 [2024-11-20 11:21:14.900063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.619 [2024-11-20 11:21:14.900070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.619 [2024-11-20 11:21:14.912396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.619 [2024-11-20 11:21:14.912821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.619 [2024-11-20 11:21:14.912839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.619 [2024-11-20 11:21:14.912848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.619 [2024-11-20 11:21:14.913029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.619 [2024-11-20 11:21:14.913208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.619 [2024-11-20 11:21:14.913216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.619 [2024-11-20 11:21:14.913228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.619 [2024-11-20 11:21:14.913235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.619 [2024-11-20 11:21:14.925539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.619 [2024-11-20 11:21:14.925883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.619 [2024-11-20 11:21:14.925900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.619 [2024-11-20 11:21:14.925908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.619 [2024-11-20 11:21:14.926090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.619 [2024-11-20 11:21:14.926294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.619 [2024-11-20 11:21:14.926303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.619 [2024-11-20 11:21:14.926311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.619 [2024-11-20 11:21:14.926317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.619 [2024-11-20 11:21:14.938624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.619 [2024-11-20 11:21:14.938975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.619 [2024-11-20 11:21:14.938992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.619 [2024-11-20 11:21:14.939001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.619 [2024-11-20 11:21:14.939177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.619 [2024-11-20 11:21:14.939354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.619 [2024-11-20 11:21:14.939363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.619 [2024-11-20 11:21:14.939370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.619 [2024-11-20 11:21:14.939376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.619 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.619 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:47.619 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:47.619 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:47.619 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.619 [2024-11-20 11:21:14.951683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.619 [2024-11-20 11:21:14.952119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.619 [2024-11-20 11:21:14.952137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.619 [2024-11-20 11:21:14.952145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.619 [2024-11-20 11:21:14.952321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.619 [2024-11-20 11:21:14.952499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.619 [2024-11-20 11:21:14.952511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.619 [2024-11-20 11:21:14.952519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.619 [2024-11-20 11:21:14.952527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.619 [2024-11-20 11:21:14.964844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.619 [2024-11-20 11:21:14.965276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.619 [2024-11-20 11:21:14.965293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.619 [2024-11-20 11:21:14.965301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.619 [2024-11-20 11:21:14.965477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.619 [2024-11-20 11:21:14.965654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.619 [2024-11-20 11:21:14.965662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.619 [2024-11-20 11:21:14.965669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.619 [2024-11-20 11:21:14.965676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.619 [2024-11-20 11:21:14.977996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.619 [2024-11-20 11:21:14.978287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.619 [2024-11-20 11:21:14.978304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.619 [2024-11-20 11:21:14.978311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.619 [2024-11-20 11:21:14.978487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.619 [2024-11-20 11:21:14.978664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.619 [2024-11-20 11:21:14.978672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.619 [2024-11-20 11:21:14.978679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.619 [2024-11-20 11:21:14.978685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.619 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.619 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:47.619 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.619 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.619 [2024-11-20 11:21:14.991176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.619 [2024-11-20 11:21:14.991212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.619 [2024-11-20 11:21:14.991622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.619 [2024-11-20 11:21:14.991638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.619 [2024-11-20 11:21:14.991646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.619 [2024-11-20 11:21:14.991822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.619 [2024-11-20 11:21:14.992008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.619 [2024-11-20 11:21:14.992017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.619 [2024-11-20 11:21:14.992024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.619 [2024-11-20 11:21:14.992031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.619 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.620 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:47.620 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.620 11:21:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.620 4971.33 IOPS, 19.42 MiB/s [2024-11-20T10:21:15.116Z] [2024-11-20 11:21:15.004321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.620 [2024-11-20 11:21:15.004728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.620 [2024-11-20 11:21:15.004745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.620 [2024-11-20 11:21:15.004752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.620 [2024-11-20 11:21:15.004929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.620 [2024-11-20 11:21:15.005111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.620 [2024-11-20 11:21:15.005121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.620 [2024-11-20 11:21:15.005128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.620 [2024-11-20 11:21:15.005135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.620 [2024-11-20 11:21:15.017464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.620 [2024-11-20 11:21:15.017856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.620 [2024-11-20 11:21:15.017873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.620 [2024-11-20 11:21:15.017881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.620 [2024-11-20 11:21:15.018061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.620 [2024-11-20 11:21:15.018242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.620 [2024-11-20 11:21:15.018251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.620 [2024-11-20 11:21:15.018258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.620 [2024-11-20 11:21:15.018264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.620 Malloc0 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.620 [2024-11-20 11:21:15.030594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:47.620 [2024-11-20 11:21:15.030931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.620 [2024-11-20 11:21:15.030954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.620 [2024-11-20 11:21:15.030967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.620 [2024-11-20 11:21:15.031145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.620 [2024-11-20 11:21:15.031323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.620 [2024-11-20 11:21:15.031332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.620 [2024-11-20 11:21:15.031340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.620 [2024-11-20 11:21:15.031348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.620 [2024-11-20 11:21:15.043663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.620 [2024-11-20 11:21:15.044062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.620 [2024-11-20 11:21:15.044080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8f500 with addr=10.0.0.2, port=4420 00:26:47.620 [2024-11-20 11:21:15.044088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8f500 is same with the state(6) to be set 00:26:47.620 [2024-11-20 11:21:15.044264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8f500 (9): Bad file descriptor 00:26:47.620 [2024-11-20 11:21:15.044441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:47.620 [2024-11-20 11:21:15.044450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:47.620 [2024-11-20 11:21:15.044456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:47.620 [2024-11-20 11:21:15.044463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.620 [2024-11-20 11:21:15.049637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.620 11:21:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 17288 00:26:47.620 [2024-11-20 11:21:15.056774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.879 [2024-11-20 11:21:15.120755] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:26:49.756 5650.57 IOPS, 22.07 MiB/s [2024-11-20T10:21:18.189Z] 6332.50 IOPS, 24.74 MiB/s [2024-11-20T10:21:19.126Z] 6874.00 IOPS, 26.85 MiB/s [2024-11-20T10:21:20.062Z] 7299.60 IOPS, 28.51 MiB/s [2024-11-20T10:21:21.438Z] 7649.00 IOPS, 29.88 MiB/s [2024-11-20T10:21:22.373Z] 7927.17 IOPS, 30.97 MiB/s [2024-11-20T10:21:23.310Z] 8174.62 IOPS, 31.93 MiB/s [2024-11-20T10:21:24.248Z] 8377.64 IOPS, 32.73 MiB/s
00:26:56.752 Latency(us)
00:26:56.752 [2024-11-20T10:21:24.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:56.752 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:56.752 Verification LBA range: start 0x0 length 0x4000
00:26:56.752 Nvme1n1 : 15.00 8565.08 33.46 10945.75 0.00 6540.42 680.29 15728.64
00:26:56.752 [2024-11-20T10:21:24.248Z] ===================================================================================================================
00:26:56.752 [2024-11-20T10:21:24.248Z] Total : 8565.08 33.46 10945.75 0.00 6540.42 680.29 15728.64
00:26:56.752 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:26:56.752 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:56.752 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.752 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:56.752 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.752 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:56.752 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:56.752 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:56.752 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:26:56.752 11:21:24
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:56.752 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:26:56.753 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:56.753 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:56.753 rmmod nvme_tcp
00:26:56.753 rmmod nvme_fabrics
00:26:56.753 rmmod nvme_keyring
00:26:56.753 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:57.012 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:26:57.012 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 18589 ']'
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 18589
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 18589 ']'
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 18589
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 18589
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 18589'
killing process with pid 18589
11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 18589
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 18589
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:57.013 11:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:59.550 11:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:59.550
00:26:59.550 real 0m26.034s
00:26:59.550 user 1m0.528s
00:26:59.550 sys 0m6.816s
00:26:59.550 11:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:59.550 11:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:59.550 ************************************
00:26:59.550 END TEST nvmf_bdevperf
00:26:59.550 ************************************
00:26:59.550 11:21:26
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.551 ************************************ 00:26:59.551 START TEST nvmf_target_disconnect 00:26:59.551 ************************************ 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:59.551 * Looking for test storage... 00:26:59.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:59.551 11:21:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:59.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.551 --rc genhtml_branch_coverage=1 00:26:59.551 --rc genhtml_function_coverage=1 00:26:59.551 --rc genhtml_legend=1 00:26:59.551 --rc geninfo_all_blocks=1 00:26:59.551 --rc geninfo_unexecuted_blocks=1 
00:26:59.551 00:26:59.551 ' 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:59.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.551 --rc genhtml_branch_coverage=1 00:26:59.551 --rc genhtml_function_coverage=1 00:26:59.551 --rc genhtml_legend=1 00:26:59.551 --rc geninfo_all_blocks=1 00:26:59.551 --rc geninfo_unexecuted_blocks=1 00:26:59.551 00:26:59.551 ' 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:59.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.551 --rc genhtml_branch_coverage=1 00:26:59.551 --rc genhtml_function_coverage=1 00:26:59.551 --rc genhtml_legend=1 00:26:59.551 --rc geninfo_all_blocks=1 00:26:59.551 --rc geninfo_unexecuted_blocks=1 00:26:59.551 00:26:59.551 ' 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:59.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.551 --rc genhtml_branch_coverage=1 00:26:59.551 --rc genhtml_function_coverage=1 00:26:59.551 --rc genhtml_legend=1 00:26:59.551 --rc geninfo_all_blocks=1 00:26:59.551 --rc geninfo_unexecuted_blocks=1 00:26:59.551 00:26:59.551 ' 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.551 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.552 11:21:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:59.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:59.552 11:21:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:06.186 
11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:06.186 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:06.186 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:06.186 Found net devices under 0000:86:00.0: cvl_0_0 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.186 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:06.187 Found net devices under 0000:86:00.1: cvl_0_1 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.187 11:21:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:06.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:27:06.187 00:27:06.187 --- 10.0.0.2 ping statistics --- 00:27:06.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.187 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:27:06.187 00:27:06.187 --- 10.0.0.1 ping statistics --- 00:27:06.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.187 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:06.187 11:21:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:06.187 ************************************ 00:27:06.187 START TEST nvmf_target_disconnect_tc1 00:27:06.187 ************************************ 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:06.187 [2024-11-20 11:21:32.969185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.187 [2024-11-20 11:21:32.969238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13eeab0 with 
addr=10.0.0.2, port=4420 00:27:06.187 [2024-11-20 11:21:32.969257] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:06.187 [2024-11-20 11:21:32.969270] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:06.187 [2024-11-20 11:21:32.969277] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:06.187 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:06.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:06.187 Initializing NVMe Controllers 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:06.187 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:06.187 00:27:06.187 real 0m0.126s 00:27:06.188 user 0m0.052s 00:27:06.188 sys 0m0.069s 00:27:06.188 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:06.188 11:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 ************************************ 00:27:06.188 END TEST nvmf_target_disconnect_tc1 00:27:06.188 ************************************ 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:06.188 11:21:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 ************************************ 00:27:06.188 START TEST nvmf_target_disconnect_tc2 00:27:06.188 ************************************ 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=23587 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 23587 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 23587 ']' 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 [2024-11-20 11:21:33.109429] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:27:06.188 [2024-11-20 11:21:33.109471] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.188 [2024-11-20 11:21:33.190632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.188 [2024-11-20 11:21:33.234122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.188 [2024-11-20 11:21:33.234160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.188 [2024-11-20 11:21:33.234168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.188 [2024-11-20 11:21:33.234173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.188 [2024-11-20 11:21:33.234178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:06.188 [2024-11-20 11:21:33.235881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:06.188 [2024-11-20 11:21:33.235988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:06.188 [2024-11-20 11:21:33.236096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:06.188 [2024-11-20 11:21:33.236096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 Malloc0 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.188 11:21:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 [2024-11-20 11:21:33.425316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.188 11:21:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 [2024-11-20 11:21:33.457556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=23783 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:06.188 11:21:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:08.190 11:21:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 23587 00:27:08.190 11:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 
Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 [2024-11-20 11:21:35.493391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O 
failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 
00:27:08.190 [2024-11-20 11:21:35.493602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 
starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Read completed with error (sct=0, sc=8) 00:27:08.190 starting I/O failed 00:27:08.190 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 [2024-11-20 11:21:35.493794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, 
sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Write completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 Read completed with error (sct=0, sc=8) 00:27:08.191 starting I/O failed 00:27:08.191 [2024-11-20 11:21:35.494015] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.191 [2024-11-20 11:21:35.494189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.191 [2024-11-20 11:21:35.494261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.191 qpair failed and we were unable to recover it. 00:27:08.191 [2024-11-20 11:21:35.494542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.191 [2024-11-20 11:21:35.494582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.191 qpair failed and we were unable to recover it. 00:27:08.191 [2024-11-20 11:21:35.494710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.191 [2024-11-20 11:21:35.494742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.191 qpair failed and we were unable to recover it. 00:27:08.191 [2024-11-20 11:21:35.495013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.191 [2024-11-20 11:21:35.495047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.191 qpair failed and we were unable to recover it. 00:27:08.191 [2024-11-20 11:21:35.495258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.191 [2024-11-20 11:21:35.495268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.191 qpair failed and we were unable to recover it. 
00:27:08.191 [2024-11-20 11:21:35.495398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.191 [2024-11-20 11:21:35.495408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.191 qpair failed and we were unable to recover it. 00:27:08.191 [2024-11-20 11:21:35.495495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.191 [2024-11-20 11:21:35.495505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.191 qpair failed and we were unable to recover it. 00:27:08.191 [2024-11-20 11:21:35.495568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.191 [2024-11-20 11:21:35.495578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.191 qpair failed and we were unable to recover it. 00:27:08.191 [2024-11-20 11:21:35.495673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.191 [2024-11-20 11:21:35.495683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.191 qpair failed and we were unable to recover it. 00:27:08.191 [2024-11-20 11:21:35.495760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.191 [2024-11-20 11:21:35.495770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.191 qpair failed and we were unable to recover it. 
00:27:08.191 [2024-11-20 11:21:35.496006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.496039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.191 [2024-11-20 11:21:35.496172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.496202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.191 [2024-11-20 11:21:35.496400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.496430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.191 [2024-11-20 11:21:35.496531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.496562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.191 [2024-11-20 11:21:35.496847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.496877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.191 [2024-11-20 11:21:35.496983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.496994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.191 [2024-11-20 11:21:35.497087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.497097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.191 [2024-11-20 11:21:35.497253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.497284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.191 [2024-11-20 11:21:35.497400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.497431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.191 [2024-11-20 11:21:35.497641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.497672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.191 [2024-11-20 11:21:35.497844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.191 [2024-11-20 11:21:35.497875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.191 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.498018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.498029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.498108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.498118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.498181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.498191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.498252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.498262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.498433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.498453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.498528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.498550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.498699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.498710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.498838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.498870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.499076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.499108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.499236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.499268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.499370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.499379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.499442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.499451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.499598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.499608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.499731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.499741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.499824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.499834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.499983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.499993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.500131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.500162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.500343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.500381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.500490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.500523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.500729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.500760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.500873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.500904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.501106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.501139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.501330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.501361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.501547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.501579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.501752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.501783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.501915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.501946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.502080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.502112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.502295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.502326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.502512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.502542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.502665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.502697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.502878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.502910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.503147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.192 [2024-11-20 11:21:35.503179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.192 qpair failed and we were unable to recover it.
00:27:08.192 [2024-11-20 11:21:35.503418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.503450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.503632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.503664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.503783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.503814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.503987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.504020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.504126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.504155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.505557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.505592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.505768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.505797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.506033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.506065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.506177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.506208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.506445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.506476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.506715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.506746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.506913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.506944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.507276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.507348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.507539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.507576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.507692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.507725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.507894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.507925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.508060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.508092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.508287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.508319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.508578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.508609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.508821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.508853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.509042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.509075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.509332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.509364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.509622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.509652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.509821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.509853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.509980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.510014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.510149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.510180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.510430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.510462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.510702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.510732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.510968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.511002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.511134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.511166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.511372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.511403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.511635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.511666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.511785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.193 [2024-11-20 11:21:35.511816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.193 qpair failed and we were unable to recover it.
00:27:08.193 [2024-11-20 11:21:35.511988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.512021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.512197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.512229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.512396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.512427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.512621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.512653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.512859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.512890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.513139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.513172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.513297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.513336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.513515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.513547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.513748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.513780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.513942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.513987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.514221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.514254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.514455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.514487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.514734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.514767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.514945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.514990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.515199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.515231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.515467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.515499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.515681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.515713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.515900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.515931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.516155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.194 [2024-11-20 11:21:35.516187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.194 qpair failed and we were unable to recover it.
00:27:08.194 [2024-11-20 11:21:35.516313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.516343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.516452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.516484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.516607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.516639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.516807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.516838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.517027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.517061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 
00:27:08.194 [2024-11-20 11:21:35.517187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.517219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.517327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.517357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.517487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.517518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.517703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.517734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.517873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.517904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 
00:27:08.194 [2024-11-20 11:21:35.518112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.518145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.518323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.518355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.518485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.518515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.194 [2024-11-20 11:21:35.518693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.194 [2024-11-20 11:21:35.518725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.194 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.518901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.518940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 
00:27:08.195 [2024-11-20 11:21:35.519080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.519112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.519309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.519341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.519461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.519492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.519748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.519780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.519906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.519938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 
00:27:08.195 [2024-11-20 11:21:35.520133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.520165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.520360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.520392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.520508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.520538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.520709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.520740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.520936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.520979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 
00:27:08.195 [2024-11-20 11:21:35.521094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.521126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.521334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.521365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.521554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.521586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.521769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.521801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.521915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.521946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 
00:27:08.195 [2024-11-20 11:21:35.522132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.522164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.522386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.522419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.522595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.522627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.522882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.522914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.523109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.523142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 
00:27:08.195 [2024-11-20 11:21:35.523331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.523362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.523497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.523539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.523719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.523751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.524005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.524040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.524225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.524258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 
00:27:08.195 [2024-11-20 11:21:35.524483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.524514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.195 qpair failed and we were unable to recover it. 00:27:08.195 [2024-11-20 11:21:35.524694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.195 [2024-11-20 11:21:35.524733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.525006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.525039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.525159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.525190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.525453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.525485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 
00:27:08.196 [2024-11-20 11:21:35.525742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.525774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.525945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.525985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.526243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.526274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.526407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.526438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.526565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.526597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 
00:27:08.196 [2024-11-20 11:21:35.526860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.526890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.527163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.527197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.527323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.527354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.527635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.527667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.527923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.527967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 
00:27:08.196 [2024-11-20 11:21:35.528184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.528215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.528450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.528482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.528694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.528725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.528905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.528936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.529132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.529165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 
00:27:08.196 [2024-11-20 11:21:35.529361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.529392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.529652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.529683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.529880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.529911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.530107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.530141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.530265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.530296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 
00:27:08.196 [2024-11-20 11:21:35.530490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.530521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.530732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.530764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.531032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.531066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.531195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.531227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.531352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.531383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 
00:27:08.196 [2024-11-20 11:21:35.531594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.531625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.531808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.531840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.532076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.532109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.532291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.532323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 00:27:08.196 [2024-11-20 11:21:35.532439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.196 [2024-11-20 11:21:35.532470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.196 qpair failed and we were unable to recover it. 
00:27:08.197 [2024-11-20 11:21:35.532596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.197 [2024-11-20 11:21:35.532627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.197 qpair failed and we were unable to recover it. 00:27:08.197 [2024-11-20 11:21:35.532864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.197 [2024-11-20 11:21:35.532895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.197 qpair failed and we were unable to recover it. 00:27:08.197 [2024-11-20 11:21:35.533076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.197 [2024-11-20 11:21:35.533110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.197 qpair failed and we were unable to recover it. 00:27:08.197 [2024-11-20 11:21:35.533284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.197 [2024-11-20 11:21:35.533317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.197 qpair failed and we were unable to recover it. 00:27:08.197 [2024-11-20 11:21:35.533499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.197 [2024-11-20 11:21:35.533529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.197 qpair failed and we were unable to recover it. 
00:27:08.197 [2024-11-20 11:21:35.533782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.197 [2024-11-20 11:21:35.533814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.197 qpair failed and we were unable to recover it. 00:27:08.197 [2024-11-20 11:21:35.534048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.197 [2024-11-20 11:21:35.534082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.197 qpair failed and we were unable to recover it. 00:27:08.197 [2024-11-20 11:21:35.534223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.197 [2024-11-20 11:21:35.534254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.197 qpair failed and we were unable to recover it. 00:27:08.197 [2024-11-20 11:21:35.534435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.197 [2024-11-20 11:21:35.534466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.197 qpair failed and we were unable to recover it. 00:27:08.197 [2024-11-20 11:21:35.534597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.197 [2024-11-20 11:21:35.534629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.197 qpair failed and we were unable to recover it. 
00:27:08.197 [2024-11-20 11:21:35.534814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.197 [2024-11-20 11:21:35.534845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.197 qpair failed and we were unable to recover it.
00:27:08.200 [... identical connect()/qpair-recovery error repeated through 2024-11-20 11:21:35.559687, same tqpair=0x16e5ba0, addr=10.0.0.2, port=4420 ...]
00:27:08.200 [2024-11-20 11:21:35.559890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-11-20 11:21:35.559922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-11-20 11:21:35.560065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-11-20 11:21:35.560098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-11-20 11:21:35.560301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-11-20 11:21:35.560334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-11-20 11:21:35.560520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-11-20 11:21:35.560551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.560684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.560723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-11-20 11:21:35.560838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.560871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.560992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.561028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.561170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.561202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.561320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.561351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.561531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.561564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-11-20 11:21:35.561686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.561717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.561822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.561854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.561966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.562000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.562206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.562239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.562442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.562474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-11-20 11:21:35.562671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.562703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.562825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.562857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.562980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.563014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.563195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.563227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.563464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.563496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-11-20 11:21:35.563690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.563721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.563963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.563996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.564164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.564195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.564331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.564363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.564545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.564576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-11-20 11:21:35.564756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.564786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.564963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.564996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.565175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.565207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.565316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.565347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.565530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.565561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-11-20 11:21:35.565739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.565771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.565876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.565912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.566036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.566068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.566193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.566224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.566393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.566424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-11-20 11:21:35.566659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.566690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-11-20 11:21:35.566869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-11-20 11:21:35.566900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.567026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.567058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.567166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.567195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.567304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.567333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 
00:27:08.202 [2024-11-20 11:21:35.567537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.567567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.567757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.567789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.567985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.568019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.568141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.568173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.568285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.568316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 
00:27:08.202 [2024-11-20 11:21:35.568436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.568468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.568646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.568678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.568789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.568820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.569002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.569035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.569336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.569368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 
00:27:08.202 [2024-11-20 11:21:35.569542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.569573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.569752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.569783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.569968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.570001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.570171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.570203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.570334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.570365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 
00:27:08.202 [2024-11-20 11:21:35.570481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.570512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.570614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.570646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.570755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.570786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.570893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.570930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.571113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.571145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 
00:27:08.202 [2024-11-20 11:21:35.571272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.571303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.571498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.571530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.571651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.571682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.571785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.571818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.571998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.572030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 
00:27:08.202 [2024-11-20 11:21:35.572209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.572239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.572346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.572378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.572617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.572647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.572747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.572778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 00:27:08.202 [2024-11-20 11:21:35.572888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.202 [2024-11-20 11:21:35.572919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.202 qpair failed and we were unable to recover it. 
00:27:08.203 [2024-11-20 11:21:35.573062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.203 [2024-11-20 11:21:35.573093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.203 qpair failed and we were unable to recover it. 00:27:08.203 [2024-11-20 11:21:35.573269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.203 [2024-11-20 11:21:35.573300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.203 qpair failed and we were unable to recover it. 00:27:08.203 [2024-11-20 11:21:35.573477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.203 [2024-11-20 11:21:35.573507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.203 qpair failed and we were unable to recover it. 00:27:08.203 [2024-11-20 11:21:35.573709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.203 [2024-11-20 11:21:35.573740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.203 qpair failed and we were unable to recover it. 00:27:08.203 [2024-11-20 11:21:35.573982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.203 [2024-11-20 11:21:35.574014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.203 qpair failed and we were unable to recover it. 
00:27:08.203 [2024-11-20 11:21:35.574184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.203 [2024-11-20 11:21:35.574216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.203 qpair failed and we were unable to recover it.
[... same error pair repeated continuously from 11:21:35.574 through 11:21:35.598: posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), then nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x16e5ba0, addr=10.0.0.2, port=4420, and the qpair cannot be recovered ...]
00:27:08.206 [2024-11-20 11:21:35.598075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.206 [2024-11-20 11:21:35.598107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.206 qpair failed and we were unable to recover it.
00:27:08.206 [2024-11-20 11:21:35.598313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.206 [2024-11-20 11:21:35.598344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.206 qpair failed and we were unable to recover it. 00:27:08.206 [2024-11-20 11:21:35.598481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.206 [2024-11-20 11:21:35.598512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.206 qpair failed and we were unable to recover it. 00:27:08.206 [2024-11-20 11:21:35.598692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.206 [2024-11-20 11:21:35.598724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.206 qpair failed and we were unable to recover it. 00:27:08.206 [2024-11-20 11:21:35.598905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.206 [2024-11-20 11:21:35.598936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.206 qpair failed and we were unable to recover it. 00:27:08.206 [2024-11-20 11:21:35.599193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.206 [2024-11-20 11:21:35.599264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.206 qpair failed and we were unable to recover it. 
00:27:08.206 [2024-11-20 11:21:35.599460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.206 [2024-11-20 11:21:35.599495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.206 qpair failed and we were unable to recover it. 00:27:08.206 [2024-11-20 11:21:35.599670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.206 [2024-11-20 11:21:35.599701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.206 qpair failed and we were unable to recover it. 00:27:08.206 [2024-11-20 11:21:35.599872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.206 [2024-11-20 11:21:35.599903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.206 qpair failed and we were unable to recover it. 00:27:08.206 [2024-11-20 11:21:35.600103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.206 [2024-11-20 11:21:35.600137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.206 qpair failed and we were unable to recover it. 00:27:08.206 [2024-11-20 11:21:35.600374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.600405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 
00:27:08.207 [2024-11-20 11:21:35.600638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.600669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.600856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.600886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.601001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.601032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.601217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.601248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.601484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.601516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 
00:27:08.207 [2024-11-20 11:21:35.601699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.601730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.601898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.601928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.602175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.602223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.602361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.602390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.602521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.602553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 
00:27:08.207 [2024-11-20 11:21:35.602736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.602767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.602939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.602982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.603183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.603214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.603424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.603455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.603623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.603654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 
00:27:08.207 [2024-11-20 11:21:35.603869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.603899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.604085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.604117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.604257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.604287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.604475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.604506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.604633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.604665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 
00:27:08.207 [2024-11-20 11:21:35.604908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.604939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.605195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.605228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.605474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.605505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.605769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.605799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.605978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.606012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 
00:27:08.207 [2024-11-20 11:21:35.606180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.606210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.606327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.606357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.606528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.606560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.606745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.606775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.606938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.606991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 
00:27:08.207 [2024-11-20 11:21:35.607171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.607203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-11-20 11:21:35.607389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-11-20 11:21:35.607419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.607531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.607562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.607734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.607765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.608033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.608105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 
00:27:08.208 [2024-11-20 11:21:35.608306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.608339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.608529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.608560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.608676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.608707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.608844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.608874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.609108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.609141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 
00:27:08.208 [2024-11-20 11:21:35.609387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.609418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.609679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.609709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.609942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.609984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.610192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.610223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.610390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.610420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 
00:27:08.208 [2024-11-20 11:21:35.610647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.610677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.610846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.610876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.611047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.611089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.611347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.611378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.611512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.611543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 
00:27:08.208 [2024-11-20 11:21:35.611675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.611706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.611939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.611980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.612178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.612208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.612389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.612420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.612606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.612637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 
00:27:08.208 [2024-11-20 11:21:35.612814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.612844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.612964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.612996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.613175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.613206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.613416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.613446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.613578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.613608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 
00:27:08.208 [2024-11-20 11:21:35.613732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.613763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.613969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.614002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.614279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.614309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.614497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-11-20 11:21:35.614528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-11-20 11:21:35.614710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-11-20 11:21:35.614740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 
00:27:08.209 [2024-11-20 11:21:35.614979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-11-20 11:21:35.615011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-11-20 11:21:35.615246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-11-20 11:21:35.615277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-11-20 11:21:35.615522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-11-20 11:21:35.615552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-11-20 11:21:35.615844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-11-20 11:21:35.615875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-11-20 11:21:35.616131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-11-20 11:21:35.616163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 
00:27:08.210 [2024-11-20 11:21:35.625436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.210 [2024-11-20 11:21:35.625468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.210 qpair failed and we were unable to recover it. 00:27:08.210 [2024-11-20 11:21:35.625588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.210 [2024-11-20 11:21:35.625619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.210 qpair failed and we were unable to recover it. 00:27:08.210 [2024-11-20 11:21:35.625796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.210 [2024-11-20 11:21:35.625827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.210 qpair failed and we were unable to recover it. 00:27:08.210 [2024-11-20 11:21:35.626021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.210 [2024-11-20 11:21:35.626053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.210 qpair failed and we were unable to recover it. 00:27:08.210 [2024-11-20 11:21:35.626220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.210 [2024-11-20 11:21:35.626292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.210 qpair failed and we were unable to recover it. 
00:27:08.212 [2024-11-20 11:21:35.640673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.640704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-11-20 11:21:35.640944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.640987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-11-20 11:21:35.641230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.641262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-11-20 11:21:35.641498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.641530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-11-20 11:21:35.641715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.641746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 
00:27:08.212 [2024-11-20 11:21:35.642012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.642044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-11-20 11:21:35.642157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.642188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-11-20 11:21:35.642320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.642352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-11-20 11:21:35.642469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.642500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-11-20 11:21:35.642669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.642700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 
00:27:08.212 [2024-11-20 11:21:35.642888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.642919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-11-20 11:21:35.643054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.643085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-11-20 11:21:35.643268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-11-20 11:21:35.643299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.643560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.643591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.643849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.643881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-11-20 11:21:35.644154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.644188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.644430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.644462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.644720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.644752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.644925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.644962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.645100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.645133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-11-20 11:21:35.645320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.645351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.645532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.645564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.645733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.645764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.646004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.646036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.646267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.646298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-11-20 11:21:35.646478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.646509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.646743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.646774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.646989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.647022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.647132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.647164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.647436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.647523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-11-20 11:21:35.647772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.647817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.647980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.648035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.648309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.648349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.648567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.648607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.648823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.648861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-11-20 11:21:35.649112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.649147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.649272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.649305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.649514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.649545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.649805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.649837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.650024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.650058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-11-20 11:21:35.650187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.650219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.650401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.650433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.650668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.650700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.650991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.651026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.651207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.651238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-11-20 11:21:35.651360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-11-20 11:21:35.651392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-11-20 11:21:35.651639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.651671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.651862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.651893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.652024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.652055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.652219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.652250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 
00:27:08.214 [2024-11-20 11:21:35.652435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.652466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.652603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.652636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.652817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.652848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.653052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.653083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.653262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.653293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 
00:27:08.214 [2024-11-20 11:21:35.653533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.653566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.653698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.653731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.653909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.653941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.654191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.654223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.654349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.654380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 
00:27:08.214 [2024-11-20 11:21:35.654638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.654669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.654902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.654933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.655165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.655196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.655430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.655460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.655666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.655697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 
00:27:08.214 [2024-11-20 11:21:35.655968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.656001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.656237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.656270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.656532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.656564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.656845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.656878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.657068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.657101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 
00:27:08.214 [2024-11-20 11:21:35.657291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.657323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.657438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.657468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.657660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.657692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.657867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.657900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.658115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.658149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 
00:27:08.214 [2024-11-20 11:21:35.658270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.658300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.658412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.658442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.658682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.658714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.658958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-11-20 11:21:35.658991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-11-20 11:21:35.659166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-11-20 11:21:35.659197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 
00:27:08.498 [2024-11-20 11:21:35.683419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.683451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.683633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.683663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.683836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.683868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.684055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.684089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.684326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.684358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 
00:27:08.498 [2024-11-20 11:21:35.684806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.684843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.685092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.685127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.685251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.685282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.685546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.685578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.685763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.685794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 
00:27:08.498 [2024-11-20 11:21:35.685897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.685929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.686202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.686236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.686478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.686509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.686642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.686672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.686806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.686838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 
00:27:08.498 [2024-11-20 11:21:35.687020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.687061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.687299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.687330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.687565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.687596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.687849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.687880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 00:27:08.498 [2024-11-20 11:21:35.688167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.498 [2024-11-20 11:21:35.688200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.498 qpair failed and we were unable to recover it. 
00:27:08.498 [2024-11-20 11:21:35.688373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.498 [2024-11-20 11:21:35.688405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.498 qpair failed and we were unable to recover it.
00:27:08.498 [2024-11-20 11:21:35.688538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.498 [2024-11-20 11:21:35.688569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.498 qpair failed and we were unable to recover it.
00:27:08.498 [2024-11-20 11:21:35.688850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.498 [2024-11-20 11:21:35.688882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.688988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.689019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.689124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.689155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.689347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.689379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.689498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.689529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.689705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.689736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.689877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.689908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.690107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.690141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.690242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.690273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.690535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.690567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.690751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.690783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.690983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.691017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.691259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.691291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.691492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.691525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.691776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.691808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.691934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.691977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.692100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.692131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.692299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.692331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.692580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.692611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.692727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.692760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.692968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.693012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.693129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.693160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.693400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.693432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.693548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.693579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.693765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.693796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.693998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.694031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.694146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.694178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.694434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.694465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.694641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.694671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.694836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.694867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.695048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.695081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.695295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.695328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.695515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.695546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.499 qpair failed and we were unable to recover it.
00:27:08.499 [2024-11-20 11:21:35.695725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.499 [2024-11-20 11:21:35.695757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.695886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.695918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.696093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.696165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.696370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.696407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.696652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.696684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.696864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.696896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.697165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.697199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.697470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.697501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.697623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.697653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.697922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.697960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.698150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.698181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.698418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.698449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.698660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.698691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.698925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.698966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.699147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.699188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.699444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.699475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.699655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.699686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.699963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.699997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.700262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.700293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.700475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.700505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.700718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.700749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.700916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.700946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.701194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.701225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.701344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.701375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.701546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.701576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.701767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.701797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.701931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.701973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.702107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.702139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.702410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.702441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.702678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.702708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.702892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.702922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.703102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.703133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.703353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.703384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.500 qpair failed and we were unable to recover it.
00:27:08.500 [2024-11-20 11:21:35.703568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.500 [2024-11-20 11:21:35.703599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.501 qpair failed and we were unable to recover it.
00:27:08.501 [2024-11-20 11:21:35.703776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.501 [2024-11-20 11:21:35.703807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.501 qpair failed and we were unable to recover it.
00:27:08.501 [2024-11-20 11:21:35.704039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.501 [2024-11-20 11:21:35.704072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.501 qpair failed and we were unable to recover it.
00:27:08.501 [2024-11-20 11:21:35.704259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.501 [2024-11-20 11:21:35.704290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.501 qpair failed and we were unable to recover it.
00:27:08.501 [2024-11-20 11:21:35.704546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.501 [2024-11-20 11:21:35.704576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.501 qpair failed and we were unable to recover it.
00:27:08.501 [2024-11-20 11:21:35.704815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.501 [2024-11-20 11:21:35.704846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.501 qpair failed and we were unable to recover it.
00:27:08.501 [2024-11-20 11:21:35.704973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.501 [2024-11-20 11:21:35.705004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.501 qpair failed and we were unable to recover it.
00:27:08.501 [2024-11-20 11:21:35.705119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.705149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.705378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.705451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.705599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.705634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.705822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.705853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.706028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.706062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 
00:27:08.501 [2024-11-20 11:21:35.706352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.706384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.706497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.706528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.706664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.706696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.706863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.706895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.707113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.707146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 
00:27:08.501 [2024-11-20 11:21:35.707407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.707438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.707573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.707605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.707785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.707816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.708013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.708046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.708223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.708254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 
00:27:08.501 [2024-11-20 11:21:35.708399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.708429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.708533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.708565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.708737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.708768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.708937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.708978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.709183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.709215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 
00:27:08.501 [2024-11-20 11:21:35.709333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.501 [2024-11-20 11:21:35.709365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.501 qpair failed and we were unable to recover it. 00:27:08.501 [2024-11-20 11:21:35.709538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.709568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.709756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.709786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.709999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.710031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.710209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.710239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 
00:27:08.502 [2024-11-20 11:21:35.710412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.710443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.710680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.710712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.710977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.711009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.711188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.711225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.711415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.711447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 
00:27:08.502 [2024-11-20 11:21:35.711580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.711610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.711873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.711904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.712042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.712072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.712173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.712205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.712389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.712420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 
00:27:08.502 [2024-11-20 11:21:35.712600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.712631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.712890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.712920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.713042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.713075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.713277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.713308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.713479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.713509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 
00:27:08.502 [2024-11-20 11:21:35.713677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.713707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.713892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.713923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.714121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.714154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.714324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.714355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.714523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.714554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 
00:27:08.502 [2024-11-20 11:21:35.714655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.714686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.714814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.714845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.715050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.715081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.715343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.715375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.715494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.715524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 
00:27:08.502 [2024-11-20 11:21:35.715647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.715678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.715858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.715889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.716067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.716099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.502 [2024-11-20 11:21:35.716288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.502 [2024-11-20 11:21:35.716318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.502 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.716498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.716528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 
00:27:08.503 [2024-11-20 11:21:35.716642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.716674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.716943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.716985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.717191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.717222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.717405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.717436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.717551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.717581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 
00:27:08.503 [2024-11-20 11:21:35.717792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.717822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.718028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.718060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.718188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.718219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.718387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.718417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.718603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.718633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 
00:27:08.503 [2024-11-20 11:21:35.718745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.718777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.718879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.718909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.719093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.719126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.719293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.719329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.719448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.719479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 
00:27:08.503 [2024-11-20 11:21:35.719651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.719680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.719955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.719988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.720158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.720188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.720364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.720395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.720629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.720659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 
00:27:08.503 [2024-11-20 11:21:35.720832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.720863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.721126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.721158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.721339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.721370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.721537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.721568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.721743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.721774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 
00:27:08.503 [2024-11-20 11:21:35.721963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.721997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.722178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.722209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.722478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.722509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.722690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.722720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.722964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.722996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 
00:27:08.503 [2024-11-20 11:21:35.723192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.723223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.503 [2024-11-20 11:21:35.723362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.503 [2024-11-20 11:21:35.723393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.503 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.723506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.723536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.723707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.723737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.723963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.723996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 
00:27:08.504 [2024-11-20 11:21:35.724192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.724223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.724344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.724374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.724539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.724570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.724825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.724856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.725063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.725095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 
00:27:08.504 [2024-11-20 11:21:35.725359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.725391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.725643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.725673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.725857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.725888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.726110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.726144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.726385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.726416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 
00:27:08.504 [2024-11-20 11:21:35.726533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.726563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.726736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.726766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.726943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.726983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.727120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.727151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.727315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3af0 is same with the state(6) to be set 00:27:08.504 [2024-11-20 11:21:35.727629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.727700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 
00:27:08.504 [2024-11-20 11:21:35.727872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.727942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.728164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.728200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.728416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.728449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.728694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.728726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.728986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.729020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 
00:27:08.504 [2024-11-20 11:21:35.729143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.729175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.729437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.729469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.729660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.729692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.729935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.729980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.730224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.730255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 
00:27:08.504 [2024-11-20 11:21:35.730385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.730416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.730596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.730626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.730862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.730894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.504 [2024-11-20 11:21:35.731171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.504 [2024-11-20 11:21:35.731205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.504 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.731467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.731499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 
00:27:08.505 [2024-11-20 11:21:35.731680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.731712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.731828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.731865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.732034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.732068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.732253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.732284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.732408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.732439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 
00:27:08.505 [2024-11-20 11:21:35.732555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.732587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.732758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.732790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.732969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.733001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.733116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.733147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.733286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.733317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 
00:27:08.505 [2024-11-20 11:21:35.733437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.733468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.733654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.733685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.733892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.733923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.734106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.734138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.734267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.734299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 
00:27:08.505 [2024-11-20 11:21:35.734449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.734519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.734711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.734750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.734989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.735022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.735208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.735239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.735484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.735515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 
00:27:08.505 [2024-11-20 11:21:35.735714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.735746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.735937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.735980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.736228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.736259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.736381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.736413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.736580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.736612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 
00:27:08.505 [2024-11-20 11:21:35.736803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.736834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.737003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.737036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.737226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.737258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.737397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.737435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.737624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.737655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 
00:27:08.505 [2024-11-20 11:21:35.737893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.505 [2024-11-20 11:21:35.737924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.505 qpair failed and we were unable to recover it. 00:27:08.505 [2024-11-20 11:21:35.738048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.738080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.738274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.738305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.738540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.738571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.738780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.738811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 
00:27:08.506 [2024-11-20 11:21:35.738968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.739001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.739111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.739141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.739373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.739404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.739601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.739631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.739813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.739844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 
00:27:08.506 [2024-11-20 11:21:35.740106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.740138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.740308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.740340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.740585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.740617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.740805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.740836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.741016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.741049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 
00:27:08.506 [2024-11-20 11:21:35.741333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.741363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.741544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.741575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.741768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.741799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.741907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.741937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.742154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.742186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 
00:27:08.506 [2024-11-20 11:21:35.742313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.742344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.742457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.742487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.742668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.742699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.742819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.742851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.743038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.743070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 
00:27:08.506 [2024-11-20 11:21:35.743209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.743250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.743433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.743465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.743728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.743759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.743944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.743986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 00:27:08.506 [2024-11-20 11:21:35.744163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.506 [2024-11-20 11:21:35.744195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.506 qpair failed and we were unable to recover it. 
00:27:08.506 [2024-11-20 11:21:35.744438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.506 [2024-11-20 11:21:35.744468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.506 qpair failed and we were unable to recover it.
00:27:08.506 [2024-11-20 11:21:35.744584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.506 [2024-11-20 11:21:35.744614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.506 qpair failed and we were unable to recover it.
00:27:08.506 [2024-11-20 11:21:35.744789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.506 [2024-11-20 11:21:35.744821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.506 qpair failed and we were unable to recover it.
00:27:08.506 [2024-11-20 11:21:35.745083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.506 [2024-11-20 11:21:35.745115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.506 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.745218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.745250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.745418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.745448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.745730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.745761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.745899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.745932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.746112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.746154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.746353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.746385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.746566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.746596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.746738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.746771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.747029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.747061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.747323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.747353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.747543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.747575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.747702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.747732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.747996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.748028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.748214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.748246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.748382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.748414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.748615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.748645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.748778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.748809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.748920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.748962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.749141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.749172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.749429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.749462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.749684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.749717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.749887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.749918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.750113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.750179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.750432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.750506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.750771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.750805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.750981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.751015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.751201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.751233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.507 [2024-11-20 11:21:35.751421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.507 [2024-11-20 11:21:35.751452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.507 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.751641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.751672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.751936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.751982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.752166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.752198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.752308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.752347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.752537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.752569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.752837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.752868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.753053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.753085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.753207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.753238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.753528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.753560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.753677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.753707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.753969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.754003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.754268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.754299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.754502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.754535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.754640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.754670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.754913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.754945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.755144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.755176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.755354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.755387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.755653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.755685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.755940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.755982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.756153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.756186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.756308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.756339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.756586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.756618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.756813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.756844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.756962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.756995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.757177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.757209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.757397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.757429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.757610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.757642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.757811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.757842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.758013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.758047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.758217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.758248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.758367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.758405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.758544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.758575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.758689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.508 [2024-11-20 11:21:35.758720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.508 qpair failed and we were unable to recover it.
00:27:08.508 [2024-11-20 11:21:35.758966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.758998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.759189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.759221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.759403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.759435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.759625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.759658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.759840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.759872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.760039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.760072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.760194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.760224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.760472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.760504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.760688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.760720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.760997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.761030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.761215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.761247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.761494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.761527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.761654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.761685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.761813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.761844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.762021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.762055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.762246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.762278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.762515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.762547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.762784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.762817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.763079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.763112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.763218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.763247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.763419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.763453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.763558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.763589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.763713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.763746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.763870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.763902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.764148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.764187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.764366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.764397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.764638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.764671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.764842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.764873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.765112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.765146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.765331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.509 [2024-11-20 11:21:35.765363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.509 qpair failed and we were unable to recover it.
00:27:08.509 [2024-11-20 11:21:35.765602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.509 [2024-11-20 11:21:35.765634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.509 qpair failed and we were unable to recover it. 00:27:08.509 [2024-11-20 11:21:35.765801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.509 [2024-11-20 11:21:35.765832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.509 qpair failed and we were unable to recover it. 00:27:08.509 [2024-11-20 11:21:35.765982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.509 [2024-11-20 11:21:35.766016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.509 qpair failed and we were unable to recover it. 00:27:08.509 [2024-11-20 11:21:35.766187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.766218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.766392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.766425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 
00:27:08.510 [2024-11-20 11:21:35.766612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.766643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.766754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.766786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.766984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.767017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.767162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.767195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.767384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.767416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 
00:27:08.510 [2024-11-20 11:21:35.767546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.767576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.767754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.767785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.767972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.768005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.768174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.768206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.768327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.768358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 
00:27:08.510 [2024-11-20 11:21:35.768528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.768559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.768708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.768738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.768846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.768877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.769137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.769170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.769267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.769296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 
00:27:08.510 [2024-11-20 11:21:35.769563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.769595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.769872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.769905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.770116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.770150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.770378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.770409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.770523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.770555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 
00:27:08.510 [2024-11-20 11:21:35.770735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.770767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.770968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.771001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.771240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.771272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.771506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.771538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.771705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.771737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 
00:27:08.510 [2024-11-20 11:21:35.771932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.771975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.772169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.772200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.772437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.772469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.772673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.772703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.772939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.772983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 
00:27:08.510 [2024-11-20 11:21:35.773238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.510 [2024-11-20 11:21:35.773310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.510 qpair failed and we were unable to recover it. 00:27:08.510 [2024-11-20 11:21:35.773508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.773543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.773812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.773844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.773983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.774017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.774189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.774219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 
00:27:08.511 [2024-11-20 11:21:35.774432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.774464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.774651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.774683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.774803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.774834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.775033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.775066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.775240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.775271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 
00:27:08.511 [2024-11-20 11:21:35.775510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.775540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.775724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.775755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.775881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.775912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.776100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.776132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.776329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.776361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 
00:27:08.511 [2024-11-20 11:21:35.776597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.776627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.776808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.776839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.776968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.777001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.777249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.777280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.777515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.777547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 
00:27:08.511 [2024-11-20 11:21:35.777659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.777690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.777894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.777924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.778128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.778160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.778417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.778451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.778579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.778609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 
00:27:08.511 [2024-11-20 11:21:35.778740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.778771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.778969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.779002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.779187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.779219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.779340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.779371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.779561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.779591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 
00:27:08.511 [2024-11-20 11:21:35.779785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.779816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.780001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.780034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.780333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.780364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.511 [2024-11-20 11:21:35.780472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.511 [2024-11-20 11:21:35.780504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.511 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.780751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.780782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 
00:27:08.512 [2024-11-20 11:21:35.780967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.781000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.781258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.781289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.781467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.781497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.781682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.781713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.781962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.781994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 
00:27:08.512 [2024-11-20 11:21:35.782256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.782292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.782481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.782512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.782644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.782676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.782847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.782885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.783051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.783084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 
00:27:08.512 [2024-11-20 11:21:35.783211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.783242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.783494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.783525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.783639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.783669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.783850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.783882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 00:27:08.512 [2024-11-20 11:21:35.784119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.512 [2024-11-20 11:21:35.784151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.512 qpair failed and we were unable to recover it. 
00:27:08.512 [2024-11-20 11:21:35.784257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.512 [2024-11-20 11:21:35.784287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.512 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() refused with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats on every retry between 2024-11-20 11:21:35.784422 and 11:21:35.807447 ...]
00:27:08.516 [2024-11-20 11:21:35.807625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.516 [2024-11-20 11:21:35.807655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.516 qpair failed and we were unable to recover it.
00:27:08.516 [2024-11-20 11:21:35.807774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.807805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.807986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.808018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.808277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.808309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.808499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.808529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.808706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.808736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 
00:27:08.516 [2024-11-20 11:21:35.808920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.808957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.809091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.809123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.809302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.809333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.809518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.809549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.809674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.809706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 
00:27:08.516 [2024-11-20 11:21:35.809893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.809923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.810135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.810168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.810386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.810417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.810631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.810661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.810836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.810866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 
00:27:08.516 [2024-11-20 11:21:35.810986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.811019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.811152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.811181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.811314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.811345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.811541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.811572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.811823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.811853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 
00:27:08.516 [2024-11-20 11:21:35.812024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.812057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.812268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.812299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.812422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.812453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.812622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.812652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.812823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.812853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 
00:27:08.516 [2024-11-20 11:21:35.813029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.813061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.813176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.813206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.813394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.516 [2024-11-20 11:21:35.813424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.516 qpair failed and we were unable to recover it. 00:27:08.516 [2024-11-20 11:21:35.813603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.813634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.813899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.813929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 
00:27:08.517 [2024-11-20 11:21:35.814121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.814153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.814395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.814425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.814677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.814707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.814943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.815000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.815203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.815234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 
00:27:08.517 [2024-11-20 11:21:35.815332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.815369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.815543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.815574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.815756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.815789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.815899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.815929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.816057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.816090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 
00:27:08.517 [2024-11-20 11:21:35.816220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.816252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.816361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.816391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.816561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.816593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.816831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.816862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.817119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.817151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 
00:27:08.517 [2024-11-20 11:21:35.817413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.817444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.817574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.817605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.817717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.817748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.817926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.817966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.818177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.818208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 
00:27:08.517 [2024-11-20 11:21:35.818346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.818377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.818544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.818575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.818818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.818848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.819038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.819069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.819307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.819339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 
00:27:08.517 [2024-11-20 11:21:35.819459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.517 [2024-11-20 11:21:35.819490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.517 qpair failed and we were unable to recover it. 00:27:08.517 [2024-11-20 11:21:35.819590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.819619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.819797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.819829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.819966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.819998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.820105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.820137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 
00:27:08.518 [2024-11-20 11:21:35.820264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.820294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.820506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.820536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.820803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.820834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.820979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.821012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.821128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.821160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 
00:27:08.518 [2024-11-20 11:21:35.821338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.821369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.821546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.821576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.821834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.821866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.821987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.822019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.822143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.822175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 
00:27:08.518 [2024-11-20 11:21:35.822295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.822326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.822540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.822571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.822808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.822839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.823025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.823057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.823238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.823269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 
00:27:08.518 [2024-11-20 11:21:35.823454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.823491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.823606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.823638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.823813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.823844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.823965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.823997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 00:27:08.518 [2024-11-20 11:21:35.824196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.518 [2024-11-20 11:21:35.824227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.518 qpair failed and we were unable to recover it. 
00:27:08.522 [2024-11-20 11:21:35.847282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.847313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.847490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.847521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.847695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.847726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.847902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.847933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.848066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.848098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 
00:27:08.522 [2024-11-20 11:21:35.848215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.848252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.848379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.848411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.848604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.848635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.848844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.848876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.848980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.849014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 
00:27:08.522 [2024-11-20 11:21:35.849120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.849149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.849395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.849427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.849660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.849691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.849797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.849829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.850008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.850040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 
00:27:08.522 [2024-11-20 11:21:35.850153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.850183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.850310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.850341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.850623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.850654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.850840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.850871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.851051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.851084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 
00:27:08.522 [2024-11-20 11:21:35.851215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.851247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.851376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.851407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.851570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.851600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.851720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.851752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.851852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.851883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 
00:27:08.522 [2024-11-20 11:21:35.852064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.852095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.852220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.852252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.852363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.852394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.852562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.852594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 00:27:08.522 [2024-11-20 11:21:35.852768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.522 [2024-11-20 11:21:35.852799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.522 qpair failed and we were unable to recover it. 
00:27:08.522 [2024-11-20 11:21:35.852991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.853025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.853200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.853232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.853495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.853569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.853713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.853750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.853880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.853912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 
00:27:08.523 [2024-11-20 11:21:35.854049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.854084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.854360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.854393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.854587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.854619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.854868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.854900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.855101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.855133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 
00:27:08.523 [2024-11-20 11:21:35.855256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.855288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.855401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.855433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.855548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.855579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.855754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.855786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.855914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.855946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 
00:27:08.523 [2024-11-20 11:21:35.856203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.856236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.856443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.856476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.856599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.856630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.856758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.856789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.856984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.857019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 
00:27:08.523 [2024-11-20 11:21:35.857145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.857176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.857419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.857451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.857567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.857598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.857793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.857824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.857940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.857981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 
00:27:08.523 [2024-11-20 11:21:35.858124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.858155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.858269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.858300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.858537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.858567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.858697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.858729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.523 qpair failed and we were unable to recover it. 00:27:08.523 [2024-11-20 11:21:35.858853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.523 [2024-11-20 11:21:35.858891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 
00:27:08.524 [2024-11-20 11:21:35.859014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.859045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.859167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.859199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.859306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.859337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.859467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.859498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.859673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.859704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 
00:27:08.524 [2024-11-20 11:21:35.859890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.859921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.860102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.860135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.860243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.860273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.860396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.860429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.860635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.860667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 
00:27:08.524 [2024-11-20 11:21:35.860851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.860882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.861010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.861042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.861174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.861205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.861407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.861440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.861569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.861600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 
00:27:08.524 [2024-11-20 11:21:35.861712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.861743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.861925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.861968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.862145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.862177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.862411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.862444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.862544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.862577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 
00:27:08.524 [2024-11-20 11:21:35.862827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.862859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.863039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.863072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.863282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.863314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.863482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.863514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.863636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.863668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 
00:27:08.524 [2024-11-20 11:21:35.863835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.863868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.863997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.864036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.864161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.864193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.864368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.864402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.864503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.864533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 
00:27:08.524 [2024-11-20 11:21:35.864722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.864753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.864991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.865025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-11-20 11:21:35.865142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-11-20 11:21:35.865174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.865292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.865324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.865457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.865488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-11-20 11:21:35.865592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.865623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.865793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.865824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.865998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.866032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.866157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.866189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.866365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.866397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-11-20 11:21:35.866537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.866568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.866705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.866737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.866860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.866892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.867081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.867113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.867223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.867253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-11-20 11:21:35.867376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.867409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.867599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.867631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.867735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.867767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.868037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.868070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.868186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.868218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-11-20 11:21:35.868400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.868432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.868628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.868660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.868788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.868819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.868926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.868973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.869167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.869199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-11-20 11:21:35.869312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.869343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.869510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.869541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.869730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.869763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.869870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.869901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.870022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.870054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-11-20 11:21:35.870173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.870204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.870316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.870347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.870519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.870551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.870759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.870791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-11-20 11:21:35.870914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.870957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-11-20 11:21:35.871079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-11-20 11:21:35.871112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.871309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.871340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.871518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.871549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.871665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.871696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.871802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.871836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 
00:27:08.526 [2024-11-20 11:21:35.872036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.872069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.872258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.872289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.872425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.872456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.872574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.872606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.872719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.872750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 
00:27:08.526 [2024-11-20 11:21:35.872921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.872980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.873094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.873127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.873393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.873425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.873554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.873585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.873711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.873743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 
00:27:08.526 [2024-11-20 11:21:35.873871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.873903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.874031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.874065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.874171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.874203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.874388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.874421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.874607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.874640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 
00:27:08.526 [2024-11-20 11:21:35.874829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.874862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.875111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.875144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.875276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.875309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.875498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.875530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.875716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.875748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 
00:27:08.526 [2024-11-20 11:21:35.875944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.875986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.876164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.876196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.876321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.876352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.876487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.876519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.876759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.876810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 
00:27:08.526 [2024-11-20 11:21:35.877021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.877057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.877190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.877229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.877346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.877381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-11-20 11:21:35.877530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-11-20 11:21:35.877562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.877810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.877842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 
00:27:08.527 [2024-11-20 11:21:35.877968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.878003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.878201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.878233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.878458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.878488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.878680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.878720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.878872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.878919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 
00:27:08.527 [2024-11-20 11:21:35.879148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.879189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.879328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.879366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.879484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.879516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.879760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.879792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.879914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.879945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 
00:27:08.527 [2024-11-20 11:21:35.880078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.880109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.880229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.880262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.880373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.880404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.880583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.880614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.880872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.880904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 
00:27:08.527 [2024-11-20 11:21:35.881130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.881169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.881437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.881469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.881601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.881632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.881743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.881774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.882010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.882044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 
00:27:08.527 [2024-11-20 11:21:35.882472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.527 [2024-11-20 11:21:35.882511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:08.527 qpair failed and we were unable to recover it.
00:27:08.527 [2024-11-20 11:21:35.883394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.883435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.883672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.883712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.883933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.883983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.884210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.884241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.884382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.884415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 
00:27:08.527 [2024-11-20 11:21:35.884594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.884625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-11-20 11:21:35.884812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-11-20 11:21:35.884842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.884982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.885018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.885198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.885229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.885413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.885451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 
00:27:08.528 [2024-11-20 11:21:35.885580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.885616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.885753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.885784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.885999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.886032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.886161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.886198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.886323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.886361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 
00:27:08.528 [2024-11-20 11:21:35.886539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.886570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.886690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.886725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.886966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.886997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.887147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.887179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.887300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.887332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 
00:27:08.528 [2024-11-20 11:21:35.887439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.887471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.887706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.887735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.887920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.887957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.888149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.888178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.888317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.888345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 
00:27:08.528 [2024-11-20 11:21:35.888603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.888631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.888765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.888797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.888904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.888936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.889074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.889107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.889350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.889378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 
00:27:08.528 [2024-11-20 11:21:35.889574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.889602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.889821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.889851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.889970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.890005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.890178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.890207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-11-20 11:21:35.890320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-11-20 11:21:35.890352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 
00:27:08.528 [2024-11-20 11:21:35.890489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.890517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.890708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.890737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.890909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.890937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.891214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.891242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.891451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.891480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 
00:27:08.529 [2024-11-20 11:21:35.891660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.891689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.891861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.891888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.892086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.892115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.892263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.892293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.892408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.892440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 
00:27:08.529 [2024-11-20 11:21:35.892612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.892640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.892823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.892850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.892973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.893005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.893127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.893160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.893281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.893319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 
00:27:08.529 [2024-11-20 11:21:35.893427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.893459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.893584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.893616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.893786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.893813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.894006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.894036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.894173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.894203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 
00:27:08.529 [2024-11-20 11:21:35.894338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.894366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.894490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.894524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.894631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.894663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.894855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.894883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.895133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.895162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 
00:27:08.529 [2024-11-20 11:21:35.895337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.895366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.895546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.895574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.895698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.895730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.895914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.895942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.896229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.896258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 
00:27:08.529 [2024-11-20 11:21:35.896380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.896412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.896546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.896575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.896784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.529 [2024-11-20 11:21:35.896813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.529 qpair failed and we were unable to recover it. 00:27:08.529 [2024-11-20 11:21:35.896999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.530 [2024-11-20 11:21:35.897029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.530 qpair failed and we were unable to recover it. 00:27:08.530 [2024-11-20 11:21:35.897273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.530 [2024-11-20 11:21:35.897302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.530 qpair failed and we were unable to recover it. 
00:27:08.530 [2024-11-20 11:21:35.897413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.530 [2024-11-20 11:21:35.897452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.530 qpair failed and we were unable to recover it. 00:27:08.530 [2024-11-20 11:21:35.897620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.530 [2024-11-20 11:21:35.897644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.530 qpair failed and we were unable to recover it. 00:27:08.530 [2024-11-20 11:21:35.897898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.530 [2024-11-20 11:21:35.897923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.530 qpair failed and we were unable to recover it. 00:27:08.530 [2024-11-20 11:21:35.898045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.530 [2024-11-20 11:21:35.898077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.530 qpair failed and we were unable to recover it. 00:27:08.530 [2024-11-20 11:21:35.898277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.530 [2024-11-20 11:21:35.898302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:08.530 qpair failed and we were unable to recover it. 
00:27:08.530 [2024-11-20 11:21:35.898664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.530 [2024-11-20 11:21:35.898734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.530 qpair failed and we were unable to recover it.
00:27:08.530 [2024-11-20 11:21:35.903275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.530 [2024-11-20 11:21:35.903307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.903415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.903447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.903568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.903599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.903739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.903770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.903961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.903996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 
00:27:08.531 [2024-11-20 11:21:35.904180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.904211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.904386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.904417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.904652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.904683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.904804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.904834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.905012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.905044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 
00:27:08.531 [2024-11-20 11:21:35.905221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.905253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.905451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.905482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.905600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.905631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.905813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.905843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.906039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.906071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 
00:27:08.531 [2024-11-20 11:21:35.906177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.906207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.906401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.906432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.906553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.906584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.906865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.906896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.907037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.907070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 
00:27:08.531 [2024-11-20 11:21:35.907259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.907291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.907486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.907517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.907628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.907658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.907863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.907894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.908086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.908119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 
00:27:08.531 [2024-11-20 11:21:35.908326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.908358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.908598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.908630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.908756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.908787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.908969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.909002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.909174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.909211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 
00:27:08.531 [2024-11-20 11:21:35.909326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.909356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.909620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.909651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.909828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.531 [2024-11-20 11:21:35.909858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.531 qpair failed and we were unable to recover it. 00:27:08.531 [2024-11-20 11:21:35.909995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.910030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.910216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.910248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 
00:27:08.532 [2024-11-20 11:21:35.910467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.910500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.910752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.910784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.910898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.910929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.911149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.911181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.911315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.911345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 
00:27:08.532 [2024-11-20 11:21:35.911527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.911558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.911728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.911759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.911877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.911909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.912132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.912165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.912372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.912403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 
00:27:08.532 [2024-11-20 11:21:35.912535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.912567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.912769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.912800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.912976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.913008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.913129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.913161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.913349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.913380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 
00:27:08.532 [2024-11-20 11:21:35.913572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.913603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.913841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.913871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.914074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.914107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.914317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.914348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.914473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.914505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 
00:27:08.532 [2024-11-20 11:21:35.914741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.914771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.914893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.914935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.915121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.915154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.915337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.915369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.915539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.915571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 
00:27:08.532 [2024-11-20 11:21:35.915757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.915788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.915992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.916023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.916274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.916306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.916438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.916470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.916707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.916738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 
00:27:08.532 [2024-11-20 11:21:35.916910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.532 [2024-11-20 11:21:35.916942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.532 qpair failed and we were unable to recover it. 00:27:08.532 [2024-11-20 11:21:35.917202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.917234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.917421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.917452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.917642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.917673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.917871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.917903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 
00:27:08.533 [2024-11-20 11:21:35.918115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.918149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.918332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.918362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.918554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.918585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.918753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.918785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.919042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.919074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 
00:27:08.533 [2024-11-20 11:21:35.919195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.919226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.919340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.919371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.919483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.919515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.919644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.919674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-11-20 11:21:35.919784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-11-20 11:21:35.919815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 
00:27:08.536 [2024-11-20 11:21:35.942824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.536 [2024-11-20 11:21:35.942856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.536 qpair failed and we were unable to recover it. 00:27:08.536 [2024-11-20 11:21:35.942977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.536 [2024-11-20 11:21:35.943010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.536 qpair failed and we were unable to recover it. 00:27:08.536 [2024-11-20 11:21:35.943184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.536 [2024-11-20 11:21:35.943216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.536 qpair failed and we were unable to recover it. 00:27:08.536 [2024-11-20 11:21:35.943394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.536 [2024-11-20 11:21:35.943426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.536 qpair failed and we were unable to recover it. 00:27:08.536 [2024-11-20 11:21:35.943638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.536 [2024-11-20 11:21:35.943669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.536 qpair failed and we were unable to recover it. 
00:27:08.536 [2024-11-20 11:21:35.943902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.536 [2024-11-20 11:21:35.943933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.944074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.944107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.944220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.944253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.944427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.944457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.944651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.944682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 
00:27:08.537 [2024-11-20 11:21:35.944881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.944913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.945182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.945214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.945402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.945433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.945546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.945576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.945706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.945738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 
00:27:08.537 [2024-11-20 11:21:35.945850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.945881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.946088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.946122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.946379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.946411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.946526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.946557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.946673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.946705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 
00:27:08.537 [2024-11-20 11:21:35.946823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.946853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.946963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.946995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.947171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.947202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.947337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.947369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.947623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.947654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 
00:27:08.537 [2024-11-20 11:21:35.947829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.947859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.948037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.948074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.948211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.948244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.948371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.948401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.948682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.948714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 
00:27:08.537 [2024-11-20 11:21:35.948834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.948866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.949047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.949079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.949200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.949231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.949426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.949457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.949693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.949724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 
00:27:08.537 [2024-11-20 11:21:35.949895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.949925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.950119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.950151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.950413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.950445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.950549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.950580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.950721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.950753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 
00:27:08.537 [2024-11-20 11:21:35.950937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-11-20 11:21:35.950980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-11-20 11:21:35.951102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.951132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.951250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.951281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.951379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.951412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.951527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.951558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 
00:27:08.538 [2024-11-20 11:21:35.951742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.951775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.951940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.952003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.952190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.952221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.952337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.952368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.952542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.952575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 
00:27:08.538 [2024-11-20 11:21:35.952680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.952714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.952898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.952929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.953120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.953151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.953285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.953317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.953496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.953529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 
00:27:08.538 [2024-11-20 11:21:35.953669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.953698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.953886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.953917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.954056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.954087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.954223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.954254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.954362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.954392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 
00:27:08.538 [2024-11-20 11:21:35.954565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.954596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.954779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.954810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.954918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.954961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.955086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.955119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.955249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.955281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 
00:27:08.538 [2024-11-20 11:21:35.955476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.955509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-11-20 11:21:35.955608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-11-20 11:21:35.955651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.955823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.955854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.955984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.956016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.956115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.956147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 
00:27:08.539 [2024-11-20 11:21:35.956359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.956391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.956517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.956548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.956730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.956762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.956998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.957030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.957203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.957234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 
00:27:08.539 [2024-11-20 11:21:35.957351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.957382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.957492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.957522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.957690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.957722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.957921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.957959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-11-20 11:21:35.958242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-11-20 11:21:35.958274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 
00:27:08.831 [2024-11-20 11:21:35.977814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.977872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.978078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.978150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.978301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.831 [2024-11-20 11:21:35.978337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.831 qpair failed and we were unable to recover it. 00:27:08.831 [2024-11-20 11:21:35.978538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.831 [2024-11-20 11:21:35.978570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.831 qpair failed and we were unable to recover it. 00:27:08.831 [2024-11-20 11:21:35.978686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.831 [2024-11-20 11:21:35.978719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.831 qpair failed and we were unable to recover it. 00:27:08.831 [2024-11-20 11:21:35.978904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.831 [2024-11-20 11:21:35.978936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.831 qpair failed and we were unable to recover it. 00:27:08.831 [2024-11-20 11:21:35.979070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.831 [2024-11-20 11:21:35.979102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.831 qpair failed and we were unable to recover it. 
00:27:08.831 [2024-11-20 11:21:35.979288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.979321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.979426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.979458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.979567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.979598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.979724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.979755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.979878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.979909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.980046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.980079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.980251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.980281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.980472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.980504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.980749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.980781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.980893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.980924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.981174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.981206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.981389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.981420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.981609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.981640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.981932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.981976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.982093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.982124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.982255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-11-20 11:21:35.982287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [2024-11-20 11:21:35.982399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.982431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.982557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.982590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.982766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.982797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.982979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.983013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.983132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.983167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.983290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.983322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.983430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.983462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.983714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.983745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.983914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.983958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.984070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.984101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.984272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.984303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.984406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.984437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.984571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.984601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.984721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.984751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.985016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.985049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.985176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.985207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.985317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.985349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.985484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.985521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.985644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.985675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.985783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.985814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.985929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.985992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.986099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.986131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.986234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.986266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.986434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.986465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.986576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.986608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.986781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.986812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.986981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.987014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.987184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.987216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.987323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.987355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.987521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.987552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.987676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.987709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.987827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-11-20 11:21:35.987859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [2024-11-20 11:21:35.987989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.988020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.988218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.988248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.988364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.988395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.988573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.988605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.988847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.988880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.989165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.989197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.989367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.989398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.989639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.989669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.989963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.989996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.990186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.990217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.990402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.990433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.990620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.990651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.990825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.990862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.990989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.991022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.991131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.991162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.991288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.991319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.991448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.991480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.991578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.991608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.991879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.991910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.992096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.992128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.992249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.992280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.992384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.992413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.992518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.992549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.992729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.992760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.992868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.992897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.993017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.993050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.993176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.993207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.993318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.993352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.993461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.993493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.993631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.993663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.993793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.993824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.993971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.994005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.994115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.833 [2024-11-20 11:21:35.994144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.833 qpair failed and we were unable to recover it.
00:27:08.833 [2024-11-20 11:21:35.994257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.994289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.994408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.994440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.994563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.994593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.994766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.994797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.994970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.995003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.995131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.995163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.995296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.995328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.995496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.995526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.995641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.995671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.995860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.995892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.996003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.996036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.996151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.996182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.996297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.996329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.996525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-11-20 11:21:35.996558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-20 11:21:35.997939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:35.998015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:35.998222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:35.998254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:35.998500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:35.998531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:35.998657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:35.998687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:35.998860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:35.998892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 
00:27:08.834 [2024-11-20 11:21:35.999021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:35.999061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:35.999238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:35.999269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:35.999447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:35.999477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:35.999759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:35.999791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:36.000069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:36.000103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 
00:27:08.834 [2024-11-20 11:21:36.000219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:36.000249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:36.000443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:36.000474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:36.000670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:36.000702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:36.000818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:36.000849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:36.000973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:36.001006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 
00:27:08.834 [2024-11-20 11:21:36.001129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:36.001160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:36.001280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:36.001310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:36.001496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:36.001528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.834 [2024-11-20 11:21:36.001637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.834 [2024-11-20 11:21:36.001667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.834 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.001778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.001809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-20 11:21:36.001922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.001959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.002129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.002160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.002329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.002360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.002461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.002492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.002606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.002637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-20 11:21:36.002828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.002858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.003052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.003085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.003208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.003237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.003345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.003374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.003557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.003588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-20 11:21:36.003706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.003736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.003866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.003897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.004082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.004115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.004286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.004317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.004445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.004476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-20 11:21:36.004605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.004638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.004743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.004773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.004961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.004995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.005171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.005202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.005313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.005343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-20 11:21:36.005561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.005592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.005700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.005731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.005908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.005939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.006108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.006140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.835 [2024-11-20 11:21:36.006323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.006354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-20 11:21:36.006577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.835 [2024-11-20 11:21:36.006613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.835 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.006805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.006835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.006973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.007006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.007134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.007167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.007298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.007329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-20 11:21:36.007444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.007476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.007717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.007747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.007991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.008024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.008260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.008292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.008478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.008510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-20 11:21:36.008700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.008731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.008942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.008993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.009186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.009218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.009336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.009367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.009555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.009586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-20 11:21:36.009826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.009857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.009986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.010019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.010195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.010226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.010346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.010375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.010508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.010539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-20 11:21:36.010805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.010836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.010996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.011029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.011221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.011251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.011432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.011463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.011591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.011622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-20 11:21:36.011800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.011834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.012008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.012039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.012155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.012184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.012369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.012401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.012514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.012544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-20 11:21:36.012717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.012748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.013011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.013044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.013228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.013259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.836 qpair failed and we were unable to recover it. 00:27:08.836 [2024-11-20 11:21:36.013390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.836 [2024-11-20 11:21:36.013421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.837 [2024-11-20 11:21:36.013591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-11-20 11:21:36.013621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 
00:27:08.837 [2024-11-20 11:21:36.013795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-11-20 11:21:36.013827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.837 [2024-11-20 11:21:36.013927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-11-20 11:21:36.013966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.837 [2024-11-20 11:21:36.014071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-11-20 11:21:36.014103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.837 [2024-11-20 11:21:36.014229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-11-20 11:21:36.014261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.837 [2024-11-20 11:21:36.014393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-11-20 11:21:36.014423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 
00:27:08.837 [2024-11-20 11:21:36.014634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.837 [2024-11-20 11:21:36.014675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.837 qpair failed and we were unable to recover it.
[log condensed: the same connect() failure (errno = 111, ECONNREFUSED) against addr=10.0.0.2, port=4420 repeats continuously — from 11:21:36.014 to 11:21:36.020 for tqpair=0x7f684c000b90, from 11:21:36.020 to 11:21:36.037 for tqpair=0x7f6850000b90, and once at 11:21:36.037 for tqpair=0x7f6844000b90; every attempt ends with "qpair failed and we were unable to recover it."]
00:27:08.840 [2024-11-20 11:21:36.037444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.840 [2024-11-20 11:21:36.037480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.840 qpair failed and we were unable to recover it. 00:27:08.840 [2024-11-20 11:21:36.037586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.840 [2024-11-20 11:21:36.037619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.840 qpair failed and we were unable to recover it. 00:27:08.840 [2024-11-20 11:21:36.037738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.840 [2024-11-20 11:21:36.037770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.840 qpair failed and we were unable to recover it. 00:27:08.840 [2024-11-20 11:21:36.037974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.840 [2024-11-20 11:21:36.038022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.840 qpair failed and we were unable to recover it. 00:27:08.840 [2024-11-20 11:21:36.038215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.840 [2024-11-20 11:21:36.038247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.840 qpair failed and we were unable to recover it. 
00:27:08.840 [2024-11-20 11:21:36.038451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.840 [2024-11-20 11:21:36.038492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.840 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.038622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.038653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.038829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.038859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.038981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.039014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.039120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.039152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-11-20 11:21:36.039265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.039296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.039463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.039493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.039601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.039633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.039819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.039853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.039961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.039993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-11-20 11:21:36.040170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.040200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.040443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.040474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.040601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.040631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.040819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.040850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.041063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.041097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-11-20 11:21:36.041210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.041241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.041421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.041453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.041581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.041612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.041790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.041822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.041992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.042025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-11-20 11:21:36.042207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.042239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.042413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.042444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.042556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.042586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.042766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.042796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.042938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.042983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-11-20 11:21:36.043159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.043189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.043429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.043460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.043660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.043691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.043963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.043996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.044121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.044153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 
00:27:08.841 [2024-11-20 11:21:36.044265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.044296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.044419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.044449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.044655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.841 [2024-11-20 11:21:36.044687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.841 qpair failed and we were unable to recover it. 00:27:08.841 [2024-11-20 11:21:36.044869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.044902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.045110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.045142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 
00:27:08.842 [2024-11-20 11:21:36.045329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.045360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.045484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.045516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.045643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.045674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.045805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.045836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.046046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.046079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 
00:27:08.842 [2024-11-20 11:21:36.046271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.046308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.046412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.046443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.046629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.046661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.046830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.046862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.047048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.047081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 
00:27:08.842 [2024-11-20 11:21:36.047315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.047347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.047471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.047502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.047757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.047789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.047977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.048010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.048124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.048155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 
00:27:08.842 [2024-11-20 11:21:36.048277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.048309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.048491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.048523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.048625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.048656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.048891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.048922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.049051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.049084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 
00:27:08.842 [2024-11-20 11:21:36.049271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.049301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.049415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.049446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.049583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.049613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.049727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.049758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.049968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.050001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 
00:27:08.842 [2024-11-20 11:21:36.050114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.050145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.050272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.050303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.050591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.050621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.050826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.050856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.842 [2024-11-20 11:21:36.050978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.051010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 
00:27:08.842 [2024-11-20 11:21:36.051118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.842 [2024-11-20 11:21:36.051149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.842 qpair failed and we were unable to recover it. 00:27:08.843 [2024-11-20 11:21:36.051256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.843 [2024-11-20 11:21:36.051287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.843 qpair failed and we were unable to recover it. 00:27:08.843 [2024-11-20 11:21:36.051546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.843 [2024-11-20 11:21:36.051618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.843 qpair failed and we were unable to recover it. 00:27:08.843 [2024-11-20 11:21:36.051832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.843 [2024-11-20 11:21:36.051869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.843 qpair failed and we were unable to recover it. 00:27:08.843 [2024-11-20 11:21:36.052139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.843 [2024-11-20 11:21:36.052176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.843 qpair failed and we were unable to recover it. 
00:27:08.843 [2024-11-20 11:21:36.052285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.843 [2024-11-20 11:21:36.052317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.843 qpair failed and we were unable to recover it.
00:27:08.846 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats with only the timestamps changing, from 11:21:36.052285 through 11:21:36.075460, mostly against tqpair=0x16e5ba0 and briefly against tqpair=0x7f6844000b90, all for addr=10.0.0.2, port=4420 ...]
00:27:08.846 [2024-11-20 11:21:36.075576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.846 [2024-11-20 11:21:36.075609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.846 qpair failed and we were unable to recover it. 00:27:08.846 [2024-11-20 11:21:36.075729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.846 [2024-11-20 11:21:36.075760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.075942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.075986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.076106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.076139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.076256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.076289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 
00:27:08.847 [2024-11-20 11:21:36.076418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.076451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.076617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.076650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.076769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.076801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.077036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.077070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.077192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.077222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 
00:27:08.847 [2024-11-20 11:21:36.077360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.077394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.077571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.077603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.077710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.077742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.078006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.078040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.078154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.078187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 
00:27:08.847 [2024-11-20 11:21:36.078284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.078316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.078548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.078581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.078696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.078728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.078850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.078881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.079071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.079103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 
00:27:08.847 [2024-11-20 11:21:36.079269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.079302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.079425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.079458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.079559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.079591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.079703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.079735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.079830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.079863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 
00:27:08.847 [2024-11-20 11:21:36.080096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.080130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.080250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.080282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.080466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.080498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.080704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.080737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.080855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.080887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 
00:27:08.847 [2024-11-20 11:21:36.081077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.081111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.081293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.081326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.081563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.081594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.847 [2024-11-20 11:21:36.081703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.847 [2024-11-20 11:21:36.081735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.847 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.081928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.081984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 
00:27:08.848 [2024-11-20 11:21:36.082227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.082259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.082431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.082463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.082675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.082708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.082812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.082844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.082968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.083002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 
00:27:08.848 [2024-11-20 11:21:36.083182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.083215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.083383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.083415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.083589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.083621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.083791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.083823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.083943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.083986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 
00:27:08.848 [2024-11-20 11:21:36.084171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.084204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.084325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.084357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.084610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.084642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.084811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.084843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.085011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.085045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 
00:27:08.848 [2024-11-20 11:21:36.085213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.085245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.085367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.085400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.085512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.085543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.085782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.085815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.085917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.085958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 
00:27:08.848 [2024-11-20 11:21:36.086140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.086173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.086292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.086325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.086507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.086538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.086806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.086844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.087134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.087168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 
00:27:08.848 [2024-11-20 11:21:36.087382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.087414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.087591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.087624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.087862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.087894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.088087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.088122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 00:27:08.848 [2024-11-20 11:21:36.088291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.848 [2024-11-20 11:21:36.088323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.848 qpair failed and we were unable to recover it. 
00:27:08.848 [2024-11-20 11:21:36.088516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.849 [2024-11-20 11:21:36.088548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.849 qpair failed and we were unable to recover it. 00:27:08.849 [2024-11-20 11:21:36.088672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.849 [2024-11-20 11:21:36.088705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.849 qpair failed and we were unable to recover it. 00:27:08.849 [2024-11-20 11:21:36.088833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.849 [2024-11-20 11:21:36.088865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.849 qpair failed and we were unable to recover it. 00:27:08.849 [2024-11-20 11:21:36.089048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.849 [2024-11-20 11:21:36.089083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.849 qpair failed and we were unable to recover it. 00:27:08.849 [2024-11-20 11:21:36.089263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.849 [2024-11-20 11:21:36.089296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.849 qpair failed and we were unable to recover it. 
00:27:08.849 [2024-11-20 11:21:36.089518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.849 [2024-11-20 11:21:36.089550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.849 qpair failed and we were unable to recover it. 00:27:08.849 [2024-11-20 11:21:36.089789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.849 [2024-11-20 11:21:36.089821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.849 qpair failed and we were unable to recover it. 00:27:08.849 [2024-11-20 11:21:36.090076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.849 [2024-11-20 11:21:36.090110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.849 qpair failed and we were unable to recover it. 00:27:08.849 [2024-11-20 11:21:36.090302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.849 [2024-11-20 11:21:36.090333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.849 qpair failed and we were unable to recover it. 00:27:08.849 [2024-11-20 11:21:36.090526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.849 [2024-11-20 11:21:36.090558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.849 qpair failed and we were unable to recover it. 
00:27:08.849 [2024-11-20 11:21:36.090817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.849 [2024-11-20 11:21:36.090850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.849 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats ~115 more times between 11:21:36.090 and 11:21:36.118; every reconnect attempt by tqpair=0x16e5ba0 to 10.0.0.2 port 4420 fails with errno = 111 ...]
00:27:08.853 [2024-11-20 11:21:36.118524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.118556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.118736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.118768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.119030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.119064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.119239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.119272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.119491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.119525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 
00:27:08.853 [2024-11-20 11:21:36.119694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.119725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.119962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.119994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.120129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.120162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.120352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.120385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.120564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.120596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 
00:27:08.853 [2024-11-20 11:21:36.120720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.120754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.120984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.121019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.121212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.121243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.121452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.121487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.121756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.121788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 
00:27:08.853 [2024-11-20 11:21:36.121979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.122013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.122138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.122170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.122432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.122471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.122739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.122772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.122964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.122998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 
00:27:08.853 [2024-11-20 11:21:36.123133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.123165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.123364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.123396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.123707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.123742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.123918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.123971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.124238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.124274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 
00:27:08.853 [2024-11-20 11:21:36.124480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.124513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.124814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.124847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.125065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.125100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.125307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.125339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.125627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.125660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 
00:27:08.853 [2024-11-20 11:21:36.125937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.125978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.853 qpair failed and we were unable to recover it. 00:27:08.853 [2024-11-20 11:21:36.126185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.853 [2024-11-20 11:21:36.126218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.126352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.126384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.126650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.126683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.126912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.126945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 
00:27:08.854 [2024-11-20 11:21:36.127172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.127206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.127406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.127438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.127667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.127699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.127898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.127932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.128142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.128175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 
00:27:08.854 [2024-11-20 11:21:36.128431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.128464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.128753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.128787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.129077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.129111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.129320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.129354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.129530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.129562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 
00:27:08.854 [2024-11-20 11:21:36.129829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.129861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.130132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.130166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.130293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.130325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.130499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.130531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.130819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.130852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 
00:27:08.854 [2024-11-20 11:21:36.131113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.131146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.131356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.131389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.131641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.131675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.131850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.131882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.132072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.132106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 
00:27:08.854 [2024-11-20 11:21:36.132362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.132396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.132514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.132546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.132809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.132841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.132972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.133007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.133129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.133161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 
00:27:08.854 [2024-11-20 11:21:36.133429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.133462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.133750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.133783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.134000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.134033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.134319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.134352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 00:27:08.854 [2024-11-20 11:21:36.134593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.854 [2024-11-20 11:21:36.134627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.854 qpair failed and we were unable to recover it. 
00:27:08.855 [2024-11-20 11:21:36.134818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.855 [2024-11-20 11:21:36.134849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.855 qpair failed and we were unable to recover it. 00:27:08.855 [2024-11-20 11:21:36.134987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.855 [2024-11-20 11:21:36.135022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.855 qpair failed and we were unable to recover it. 00:27:08.855 [2024-11-20 11:21:36.135174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.855 [2024-11-20 11:21:36.135208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.855 qpair failed and we were unable to recover it. 00:27:08.855 [2024-11-20 11:21:36.135473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.855 [2024-11-20 11:21:36.135505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.855 qpair failed and we were unable to recover it. 00:27:08.855 [2024-11-20 11:21:36.135790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.855 [2024-11-20 11:21:36.135822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.855 qpair failed and we were unable to recover it. 
00:27:08.855 [2024-11-20 11:21:36.136018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.855 [2024-11-20 11:21:36.136052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.855 qpair failed and we were unable to recover it. 00:27:08.855 [2024-11-20 11:21:36.136243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.855 [2024-11-20 11:21:36.136277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.855 qpair failed and we were unable to recover it. 00:27:08.855 [2024-11-20 11:21:36.136539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.855 [2024-11-20 11:21:36.136572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.855 qpair failed and we were unable to recover it. 00:27:08.855 [2024-11-20 11:21:36.136752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.855 [2024-11-20 11:21:36.136784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.855 qpair failed and we were unable to recover it. 00:27:08.855 [2024-11-20 11:21:36.136970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.855 [2024-11-20 11:21:36.137004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.855 qpair failed and we were unable to recover it. 
00:27:08.855 [2024-11-20 11:21:36.137198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.855 [2024-11-20 11:21:36.137230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.855 qpair failed and we were unable to recover it.
00:27:08.857 [2024-11-20 11:21:36.155019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.857 [2024-11-20 11:21:36.155092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:08.857 qpair failed and we were unable to recover it.
00:27:08.858 [2024-11-20 11:21:36.162510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.858 [2024-11-20 11:21:36.162540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.858 qpair failed and we were unable to recover it. 00:27:08.858 [2024-11-20 11:21:36.162665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.858 [2024-11-20 11:21:36.162698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.858 qpair failed and we were unable to recover it. 00:27:08.858 [2024-11-20 11:21:36.162798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.858 [2024-11-20 11:21:36.162831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.163007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.163079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.163224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.163260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 
00:27:08.859 [2024-11-20 11:21:36.163438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.163471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.163614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.163647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.163832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.163864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.164054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.164088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.164303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.164335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 
00:27:08.859 [2024-11-20 11:21:36.164441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.164470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.164588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.164620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.164740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.164772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.164874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.164903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.165090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.165123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 
00:27:08.859 [2024-11-20 11:21:36.165251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.165282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.165389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.165432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.165565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.165595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.165725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.165754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.165969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.166002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 
00:27:08.859 [2024-11-20 11:21:36.166127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.166157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.166395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.166427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.166530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.166558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.166732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.166763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.166866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.166896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 
00:27:08.859 [2024-11-20 11:21:36.167047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.167078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.167267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.167297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.167408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.167440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.167559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.167592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.167818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.167848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 
00:27:08.859 [2024-11-20 11:21:36.167989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.168022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.168156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.168186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.168453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.168485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.168598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.168633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 00:27:08.859 [2024-11-20 11:21:36.168762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.859 [2024-11-20 11:21:36.168794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.859 qpair failed and we were unable to recover it. 
00:27:08.859 [2024-11-20 11:21:36.169048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.169080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.169267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.169299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.169429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.169458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.169638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.169668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.169841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.169873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 
00:27:08.860 [2024-11-20 11:21:36.170007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.170041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.170162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.170193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.170379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.170412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.170654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.170685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.170813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.170844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 
00:27:08.860 [2024-11-20 11:21:36.171029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.171062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.171181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.171212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.171346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.171379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.171621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.171653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.171829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.171862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 
00:27:08.860 [2024-11-20 11:21:36.172058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.172093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.172285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.172318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.172566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.172598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.172729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.172760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.172889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.172921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 
00:27:08.860 [2024-11-20 11:21:36.173047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.173078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.173251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.173289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.173395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.173428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.173546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.173577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.173679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.173713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 
00:27:08.860 [2024-11-20 11:21:36.173831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.173860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.174061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.174095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.174274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.174306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.174546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.174578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.174697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.174728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 
00:27:08.860 [2024-11-20 11:21:36.174840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.174870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.174987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.175020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.175203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.860 [2024-11-20 11:21:36.175234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.860 qpair failed and we were unable to recover it. 00:27:08.860 [2024-11-20 11:21:36.175418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.861 [2024-11-20 11:21:36.175450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.861 qpair failed and we were unable to recover it. 00:27:08.861 [2024-11-20 11:21:36.175574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.861 [2024-11-20 11:21:36.175605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.861 qpair failed and we were unable to recover it. 
00:27:08.861 [2024-11-20 11:21:36.175722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.861 [2024-11-20 11:21:36.175753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.861 qpair failed and we were unable to recover it. 00:27:08.861 [2024-11-20 11:21:36.175868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.861 [2024-11-20 11:21:36.175899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.861 qpair failed and we were unable to recover it. 00:27:08.861 [2024-11-20 11:21:36.176020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.861 [2024-11-20 11:21:36.176053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.861 qpair failed and we were unable to recover it. 00:27:08.861 [2024-11-20 11:21:36.176227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.861 [2024-11-20 11:21:36.176259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.861 qpair failed and we were unable to recover it. 00:27:08.861 [2024-11-20 11:21:36.176471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.861 [2024-11-20 11:21:36.176503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:08.861 qpair failed and we were unable to recover it. 
00:27:08.861 [2024-11-20 11:21:36.176628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.861 [2024-11-20 11:21:36.176660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:08.861 qpair failed and we were unable to recover it.
00:27:08.861 [2024-11-20 11:21:36.178018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.861 [2024-11-20 11:21:36.178094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.861 qpair failed and we were unable to recover it.
[trimmed: the same connect() failed / sock connection error / "qpair failed and we were unable to recover it" triplet repeats with advancing timestamps — for tqpair=0x7f684c000b90 through 11:21:36.177765, then for tqpair=0x16e5ba0 through 11:21:36.205274]
00:27:08.864 [2024-11-20 11:21:36.205463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.864 [2024-11-20 11:21:36.205494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.864 qpair failed and we were unable to recover it. 00:27:08.864 [2024-11-20 11:21:36.205765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.864 [2024-11-20 11:21:36.205797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.864 qpair failed and we were unable to recover it. 00:27:08.864 [2024-11-20 11:21:36.206024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.206058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.206307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.206340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.206570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.206602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 
00:27:08.865 [2024-11-20 11:21:36.206817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.206849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.207125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.207160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.207357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.207389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.207607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.207639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.207893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.207925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 
00:27:08.865 [2024-11-20 11:21:36.208127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.208160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.208431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.208463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.208765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.208798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.209059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.209093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.209233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.209265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 
00:27:08.865 [2024-11-20 11:21:36.209432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.209470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.209682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.209714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.209914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.209954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.210081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.210114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.210377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.210409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 
00:27:08.865 [2024-11-20 11:21:36.210671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.210703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.210877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.210909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.211055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.211088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.211279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.211311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.211501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.211532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 
00:27:08.865 [2024-11-20 11:21:36.211777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.211810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.212027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.212061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.212237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.212269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.212396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.212429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.212668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.212701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 
00:27:08.865 [2024-11-20 11:21:36.212831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.212863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.213060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.213094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.213265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.213297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.213420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.213451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 00:27:08.865 [2024-11-20 11:21:36.213717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.865 [2024-11-20 11:21:36.213750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.865 qpair failed and we were unable to recover it. 
00:27:08.865 [2024-11-20 11:21:36.213933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.214003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.214141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.214173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.214385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.214417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.214697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.214729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.214900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.214932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 
00:27:08.866 [2024-11-20 11:21:36.215119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.215152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.215352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.215384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.215582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.215619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.215884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.215917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.216070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.216103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 
00:27:08.866 [2024-11-20 11:21:36.216314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.216347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.216476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.216508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.216752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.216784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.216969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.217003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.217216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.217248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 
00:27:08.866 [2024-11-20 11:21:36.217387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.217419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.217636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.217668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.217804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.217837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.218102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.218136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.218330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.218362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 
00:27:08.866 [2024-11-20 11:21:36.218484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.218515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.218810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.218842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.219044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.219078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.219261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.219293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.219476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.219509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 
00:27:08.866 [2024-11-20 11:21:36.219720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.219751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.219941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.219981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.220196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.220229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.220421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.220453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.220766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.220798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 
00:27:08.866 [2024-11-20 11:21:36.221088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.221121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.221364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.221396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.221588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.866 [2024-11-20 11:21:36.221620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.866 qpair failed and we were unable to recover it. 00:27:08.866 [2024-11-20 11:21:36.221807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.867 [2024-11-20 11:21:36.221840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.867 qpair failed and we were unable to recover it. 00:27:08.867 [2024-11-20 11:21:36.222104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.867 [2024-11-20 11:21:36.222142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.867 qpair failed and we were unable to recover it. 
00:27:08.867 [2024-11-20 11:21:36.222383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.867 [2024-11-20 11:21:36.222415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.867 qpair failed and we were unable to recover it. 00:27:08.867 [2024-11-20 11:21:36.222748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.867 [2024-11-20 11:21:36.222779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.867 qpair failed and we were unable to recover it. 00:27:08.867 [2024-11-20 11:21:36.223065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.867 [2024-11-20 11:21:36.223099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.867 qpair failed and we were unable to recover it. 00:27:08.867 [2024-11-20 11:21:36.223223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.867 [2024-11-20 11:21:36.223254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.867 qpair failed and we were unable to recover it. 00:27:08.867 [2024-11-20 11:21:36.223447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.867 [2024-11-20 11:21:36.223479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.867 qpair failed and we were unable to recover it. 
00:27:08.867 [2024-11-20 11:21:36.223624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.867 [2024-11-20 11:21:36.223656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.867 qpair failed and we were unable to recover it.
[... the same record repeats continuously from 11:21:36.224 through 11:21:36.251: connect() failed with errno = 111 in posix_sock_create, followed by the sock connection error for tqpair=0x16e5ba0 (addr=10.0.0.2, port=4420) in nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." ...]
00:27:08.870 [2024-11-20 11:21:36.252053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.870 [2024-11-20 11:21:36.252087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.870 qpair failed and we were unable to recover it. 00:27:08.870 [2024-11-20 11:21:36.252351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.870 [2024-11-20 11:21:36.252384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.870 qpair failed and we were unable to recover it. 00:27:08.870 [2024-11-20 11:21:36.252577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.870 [2024-11-20 11:21:36.252610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.870 qpair failed and we were unable to recover it. 00:27:08.870 [2024-11-20 11:21:36.252854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.870 [2024-11-20 11:21:36.252886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.870 qpair failed and we were unable to recover it. 00:27:08.870 [2024-11-20 11:21:36.253030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.870 [2024-11-20 11:21:36.253063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.870 qpair failed and we were unable to recover it. 
00:27:08.870 [2024-11-20 11:21:36.253214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.870 [2024-11-20 11:21:36.253245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.870 qpair failed and we were unable to recover it. 00:27:08.870 [2024-11-20 11:21:36.253437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.870 [2024-11-20 11:21:36.253470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.870 qpair failed and we were unable to recover it. 00:27:08.870 [2024-11-20 11:21:36.253667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.253700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.253894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.253926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.254056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.254088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 
00:27:08.871 [2024-11-20 11:21:36.254278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.254311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.254505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.254539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.254796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.254829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.255024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.255057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.255214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.255246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 
00:27:08.871 [2024-11-20 11:21:36.255479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.255512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.255801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.255833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.256104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.256140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.256380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.256413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.256671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.256705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 
00:27:08.871 [2024-11-20 11:21:36.256959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.256993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.257222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.257253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.257391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.257425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.257614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.257646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.257822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.257854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 
00:27:08.871 [2024-11-20 11:21:36.258110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.258161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.258356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.258388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.258674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.258708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.258945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.259003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.259213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.259245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 
00:27:08.871 [2024-11-20 11:21:36.259433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.259465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.259729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.259761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.259966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.260000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.260198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.260230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.260432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.260466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 
00:27:08.871 [2024-11-20 11:21:36.260780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.260813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.261048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.261082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.261276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.261309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.261506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.261539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 00:27:08.871 [2024-11-20 11:21:36.261725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.261759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.871 qpair failed and we were unable to recover it. 
00:27:08.871 [2024-11-20 11:21:36.261973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.871 [2024-11-20 11:21:36.262007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.262145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.262177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.262339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.262377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.262564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.262597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.262808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.262840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 
00:27:08.872 [2024-11-20 11:21:36.262985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.263021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.263220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.263253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.263461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.263492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.263646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.263680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.263872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.263903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 
00:27:08.872 [2024-11-20 11:21:36.264186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.264220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.264332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.264364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.264562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.264594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.264712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.264742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.264917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.264960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 
00:27:08.872 [2024-11-20 11:21:36.265223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.265256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.265466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.265500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.265610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.265643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.265835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.265868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.265991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.266027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 
00:27:08.872 [2024-11-20 11:21:36.266146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.266177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.266418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.266451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.266662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.266694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.266884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.266916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.267130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.267163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 
00:27:08.872 [2024-11-20 11:21:36.267358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.267390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.267589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.267621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.267877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.267910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.268085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.268118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.268310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.268346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 
00:27:08.872 [2024-11-20 11:21:36.268551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.268584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.268855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.268888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.269105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.269138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.872 [2024-11-20 11:21:36.269270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.872 [2024-11-20 11:21:36.269302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.872 qpair failed and we were unable to recover it. 00:27:08.873 [2024-11-20 11:21:36.269480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.873 [2024-11-20 11:21:36.269511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:08.873 qpair failed and we were unable to recover it. 
00:27:08.873 [2024-11-20 11:21:36.269820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.873 [2024-11-20 11:21:36.269852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:08.873 qpair failed and we were unable to recover it.
[... the same three-line triplet (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock connection error for addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats over 100 times between 11:21:36.269820 and 11:21:36.297229; the failing tqpair is 0x16e5ba0 throughout, except for the run from 11:21:36.278860 to 11:21:36.280152 where it is 0x7f6850000b90 ...]
00:27:09.153 [2024-11-20 11:21:36.297195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.153 [2024-11-20 11:21:36.297229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.154 qpair failed and we were unable to recover it.
00:27:09.154 [2024-11-20 11:21:36.297340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.297372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.297565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.297601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.297738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.297770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.297981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.298016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.298271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.298304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 
00:27:09.154 [2024-11-20 11:21:36.298417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.298449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.298587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.298620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.298753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.298784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.298925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.298968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.299162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.299196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 
00:27:09.154 [2024-11-20 11:21:36.299444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.299478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.299672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.299705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.299831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.299863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.300052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.300086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.300276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.300310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 
00:27:09.154 [2024-11-20 11:21:36.300509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.300543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.300720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.300752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.300944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.300988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.301100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.301134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.301266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.301298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 
00:27:09.154 [2024-11-20 11:21:36.301427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.301460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.301584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.301617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.301818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.301853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.302095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.302130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.302335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.302368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 
00:27:09.154 [2024-11-20 11:21:36.302631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.302664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.302878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.302911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.303177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.303211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.303347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.303387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.303584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.303618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 
00:27:09.154 [2024-11-20 11:21:36.303809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.303842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.304092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.154 [2024-11-20 11:21:36.304127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.154 qpair failed and we were unable to recover it. 00:27:09.154 [2024-11-20 11:21:36.304247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.304280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.304565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.304597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.304867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.304900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 
00:27:09.155 [2024-11-20 11:21:36.305192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.305226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.305361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.305393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.305520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.305552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.305862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.305895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.306121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.306156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 
00:27:09.155 [2024-11-20 11:21:36.306339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.306371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.306593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.306625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.306892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.306924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.307133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.307167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.307406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.307438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 
00:27:09.155 [2024-11-20 11:21:36.307726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.307761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.307885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.307917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.308074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.308108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.308234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.308268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.308532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.308567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 
00:27:09.155 [2024-11-20 11:21:36.308774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.308808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.309055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.309090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.309316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.309350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.309491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.309525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.309718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.309750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 
00:27:09.155 [2024-11-20 11:21:36.310022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.310056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.310210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.310243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.310441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.310474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.310686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.310719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.310994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.311028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 
00:27:09.155 [2024-11-20 11:21:36.311223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.311256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.311394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.311427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.311629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.311662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.311889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.311922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 00:27:09.155 [2024-11-20 11:21:36.312181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.312214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.155 qpair failed and we were unable to recover it. 
00:27:09.155 [2024-11-20 11:21:36.312409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.155 [2024-11-20 11:21:36.312442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.156 qpair failed and we were unable to recover it. 00:27:09.156 [2024-11-20 11:21:36.312692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.156 [2024-11-20 11:21:36.312723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.156 qpair failed and we were unable to recover it. 00:27:09.156 [2024-11-20 11:21:36.313021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.156 [2024-11-20 11:21:36.313055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.156 qpair failed and we were unable to recover it. 00:27:09.156 [2024-11-20 11:21:36.313355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.156 [2024-11-20 11:21:36.313388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.156 qpair failed and we were unable to recover it. 00:27:09.156 [2024-11-20 11:21:36.313531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.156 [2024-11-20 11:21:36.313563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.156 qpair failed and we were unable to recover it. 
00:27:09.156 [2024-11-20 11:21:36.313741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.156 [2024-11-20 11:21:36.313772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.156 qpair failed and we were unable to recover it. 00:27:09.156 [2024-11-20 11:21:36.313986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.156 [2024-11-20 11:21:36.314019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.156 qpair failed and we were unable to recover it. 00:27:09.156 [2024-11-20 11:21:36.314282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.156 [2024-11-20 11:21:36.314314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.156 qpair failed and we were unable to recover it. 00:27:09.156 [2024-11-20 11:21:36.314494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.156 [2024-11-20 11:21:36.314526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.156 qpair failed and we were unable to recover it. 00:27:09.156 [2024-11-20 11:21:36.314727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.156 [2024-11-20 11:21:36.314759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.156 qpair failed and we were unable to recover it. 
00:27:09.156 [2024-11-20 11:21:36.314959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.156 [2024-11-20 11:21:36.314994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.156 qpair failed and we were unable to recover it.
[... the three-line error above repeats 113 more times between 11:21:36.315196 and 11:21:36.345969, identical except for timestamps: errno = 111, tqpair=0x16e5ba0, addr=10.0.0.2, port=4420 throughout ...]
00:27:09.159 [2024-11-20 11:21:36.346196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.159 [2024-11-20 11:21:36.346229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.159 qpair failed and we were unable to recover it.
00:27:09.159 [2024-11-20 11:21:36.346429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.159 [2024-11-20 11:21:36.346461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.159 qpair failed and we were unable to recover it. 00:27:09.159 [2024-11-20 11:21:36.346735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.159 [2024-11-20 11:21:36.346773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.159 qpair failed and we were unable to recover it. 00:27:09.159 [2024-11-20 11:21:36.346983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.159 [2024-11-20 11:21:36.347017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.159 qpair failed and we were unable to recover it. 00:27:09.159 [2024-11-20 11:21:36.347272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.159 [2024-11-20 11:21:36.347305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.159 qpair failed and we were unable to recover it. 00:27:09.159 [2024-11-20 11:21:36.347586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.347618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 
00:27:09.160 [2024-11-20 11:21:36.347868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.347900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.348112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.348146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.348424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.348457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.348712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.348744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.348935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.348977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 
00:27:09.160 [2024-11-20 11:21:36.349187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.349219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.349518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.349550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.349818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.349851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.350076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.350133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.350436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.350469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 
00:27:09.160 [2024-11-20 11:21:36.350764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.350796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.351069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.351103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.351404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.351436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.351616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.351648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.351830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.351862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 
00:27:09.160 [2024-11-20 11:21:36.352163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.352196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.352467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.352500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.352696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.352728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.352992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.353026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.353221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.353253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 
00:27:09.160 [2024-11-20 11:21:36.353457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.353489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.353737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.353770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.354073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.354107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.354393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.354431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.354705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.354738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 
00:27:09.160 [2024-11-20 11:21:36.354940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.354995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.355270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.355302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.355601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.355634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.355904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.355937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.356223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.356256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 
00:27:09.160 [2024-11-20 11:21:36.356474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.356507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.356761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.160 [2024-11-20 11:21:36.356794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.160 qpair failed and we were unable to recover it. 00:27:09.160 [2024-11-20 11:21:36.357053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.357087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.357389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.357422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.357685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.357718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 
00:27:09.161 [2024-11-20 11:21:36.358018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.358052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.358317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.358349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.358637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.358670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.358968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.359002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.359269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.359301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 
00:27:09.161 [2024-11-20 11:21:36.359584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.359617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.359835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.359869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.360143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.360178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.360400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.360432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.360714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.360746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 
00:27:09.161 [2024-11-20 11:21:36.360957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.360991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.361243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.361275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.361550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.361582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.361783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.361816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.362117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.362151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 
00:27:09.161 [2024-11-20 11:21:36.362428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.362466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.362742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.362774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.363035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.363069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.363348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.363381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.363632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.363664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 
00:27:09.161 [2024-11-20 11:21:36.363887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.363918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.364233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.364266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.364520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.364552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.364857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.364890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.365156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.365189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 
00:27:09.161 [2024-11-20 11:21:36.365388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.365421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.365611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.365643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.365822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.365856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.366071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.366104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 00:27:09.161 [2024-11-20 11:21:36.366389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.161 [2024-11-20 11:21:36.366423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.161 qpair failed and we were unable to recover it. 
00:27:09.161 [2024-11-20 11:21:36.366704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.162 [2024-11-20 11:21:36.366736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.162 qpair failed and we were unable to recover it. 00:27:09.162 [2024-11-20 11:21:36.367046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.162 [2024-11-20 11:21:36.367080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.162 qpair failed and we were unable to recover it. 00:27:09.162 [2024-11-20 11:21:36.367337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.162 [2024-11-20 11:21:36.367370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.162 qpair failed and we were unable to recover it. 00:27:09.162 [2024-11-20 11:21:36.367495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.162 [2024-11-20 11:21:36.367527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.162 qpair failed and we were unable to recover it. 00:27:09.162 [2024-11-20 11:21:36.367798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.162 [2024-11-20 11:21:36.367831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.162 qpair failed and we were unable to recover it. 
00:27:09.162 [2024-11-20 11:21:36.368107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.162 [2024-11-20 11:21:36.368141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.162 qpair failed and we were unable to recover it. 00:27:09.162 [2024-11-20 11:21:36.368357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.162 [2024-11-20 11:21:36.368388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.162 qpair failed and we were unable to recover it. 00:27:09.162 [2024-11-20 11:21:36.368659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.162 [2024-11-20 11:21:36.368691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.162 qpair failed and we were unable to recover it. 00:27:09.162 [2024-11-20 11:21:36.368884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.162 [2024-11-20 11:21:36.368916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.162 qpair failed and we were unable to recover it. 00:27:09.162 [2024-11-20 11:21:36.369122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.162 [2024-11-20 11:21:36.369155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.162 qpair failed and we were unable to recover it. 
00:27:09.165 [2024-11-20 11:21:36.399722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.165 [2024-11-20 11:21:36.399756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.165 qpair failed and we were unable to recover it. 00:27:09.165 [2024-11-20 11:21:36.399974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.165 [2024-11-20 11:21:36.400009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.165 qpair failed and we were unable to recover it. 00:27:09.165 [2024-11-20 11:21:36.400133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.165 [2024-11-20 11:21:36.400166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.165 qpair failed and we were unable to recover it. 00:27:09.165 [2024-11-20 11:21:36.400439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.165 [2024-11-20 11:21:36.400471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.165 qpair failed and we were unable to recover it. 00:27:09.165 [2024-11-20 11:21:36.400662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.165 [2024-11-20 11:21:36.400693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.165 qpair failed and we were unable to recover it. 
00:27:09.165 [2024-11-20 11:21:36.400981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.165 [2024-11-20 11:21:36.401016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.165 qpair failed and we were unable to recover it. 00:27:09.165 [2024-11-20 11:21:36.401319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.165 [2024-11-20 11:21:36.401354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.165 qpair failed and we were unable to recover it. 00:27:09.165 [2024-11-20 11:21:36.401604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.165 [2024-11-20 11:21:36.401636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.165 qpair failed and we were unable to recover it. 00:27:09.165 [2024-11-20 11:21:36.401924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.165 [2024-11-20 11:21:36.401964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.165 qpair failed and we were unable to recover it. 00:27:09.165 [2024-11-20 11:21:36.402283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.402316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 
00:27:09.166 [2024-11-20 11:21:36.402532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.402564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.402842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.402875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.403093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.403127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.403378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.403411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.403675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.403706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 
00:27:09.166 [2024-11-20 11:21:36.403919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.403960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.404237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.404271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.404551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.404584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.404869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.404901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.405153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.405186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 
00:27:09.166 [2024-11-20 11:21:36.405312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.405344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.405545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.405577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.405851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.405884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.406078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.406112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.406352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.406384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 
00:27:09.166 [2024-11-20 11:21:36.406699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.406732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.406992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.407026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.407222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.407255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.407472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.407509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.407781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.407814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 
00:27:09.166 [2024-11-20 11:21:36.408092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.408126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.408305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.408337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.408561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.408595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.408917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.408965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.409094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.409125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 
00:27:09.166 [2024-11-20 11:21:36.409391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.409424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.409720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.409752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.409956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.409990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.410188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.410221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.410398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.410430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 
00:27:09.166 [2024-11-20 11:21:36.410703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.410736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.411011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.411046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.166 qpair failed and we were unable to recover it. 00:27:09.166 [2024-11-20 11:21:36.411308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.166 [2024-11-20 11:21:36.411340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.411644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.411676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.411940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.411983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 
00:27:09.167 [2024-11-20 11:21:36.412165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.412196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.412398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.412430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.412647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.412680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.412965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.413000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.413225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.413258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 
00:27:09.167 [2024-11-20 11:21:36.413386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.413419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.413641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.413673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.413874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.413906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.414157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.414189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.414369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.414402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 
00:27:09.167 [2024-11-20 11:21:36.414650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.414687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.414981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.415015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.415241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.415274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.415538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.415570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.415753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.415783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 
00:27:09.167 [2024-11-20 11:21:36.415977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.416011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.416189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.416221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.416491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.416524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.416809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.416841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.417124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.417157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 
00:27:09.167 [2024-11-20 11:21:36.417407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.417440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.417703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.417735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.417937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.417981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.418184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.418216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.418518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.418550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 
00:27:09.167 [2024-11-20 11:21:36.418767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.418799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.419071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.419105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.419241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.419274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.419475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.419506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.167 [2024-11-20 11:21:36.419755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.419786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 
00:27:09.167 [2024-11-20 11:21:36.420061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.167 [2024-11-20 11:21:36.420094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.167 qpair failed and we were unable to recover it. 00:27:09.168 [2024-11-20 11:21:36.420300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.168 [2024-11-20 11:21:36.420333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.168 qpair failed and we were unable to recover it. 00:27:09.168 [2024-11-20 11:21:36.420518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.168 [2024-11-20 11:21:36.420549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.168 qpair failed and we were unable to recover it. 00:27:09.168 [2024-11-20 11:21:36.420769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.168 [2024-11-20 11:21:36.420801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.168 qpair failed and we were unable to recover it. 00:27:09.168 [2024-11-20 11:21:36.421051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.168 [2024-11-20 11:21:36.421084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.168 qpair failed and we were unable to recover it. 
00:27:09.171 [2024-11-20 11:21:36.451803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.451835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.452048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.452082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.452279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.452313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.452589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.452621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.452858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.452890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 
00:27:09.171 [2024-11-20 11:21:36.453093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.453128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.453407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.453439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.453690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.453721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.453988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.454023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.454364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.454400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 
00:27:09.171 [2024-11-20 11:21:36.454724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.454756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.455030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.455065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.455297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.455329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.455602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.455634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 00:27:09.171 [2024-11-20 11:21:36.455928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.455970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.171 qpair failed and we were unable to recover it. 
00:27:09.171 [2024-11-20 11:21:36.456182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.171 [2024-11-20 11:21:36.456215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.456483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.456514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.456712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.456744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.457007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.457041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.457233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.457265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 
00:27:09.172 [2024-11-20 11:21:36.457541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.457573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.457852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.457884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.458155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.458189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.458384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.458417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.458688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.458721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 
00:27:09.172 [2024-11-20 11:21:36.458985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.459020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.459211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.459244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.459518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.459550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.459830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.459863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.460091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.460125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 
00:27:09.172 [2024-11-20 11:21:36.460310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.460343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.460607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.460640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.460940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.460982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.461243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.461276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.461404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.461436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 
00:27:09.172 [2024-11-20 11:21:36.461684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.461716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.461857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.461890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.462165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.462198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.462447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.462479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.462687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.462720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 
00:27:09.172 [2024-11-20 11:21:36.463031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.463065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.463275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.463308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.463518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.463549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.463745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.463778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.464073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.464106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 
00:27:09.172 [2024-11-20 11:21:36.464396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.464429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.464726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.464758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.465031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.465065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.172 [2024-11-20 11:21:36.465354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.172 [2024-11-20 11:21:36.465387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.172 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.465664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.465696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 
00:27:09.173 [2024-11-20 11:21:36.465946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.465990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.466266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.466300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.466580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.466613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.466860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.466893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.467184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.467218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 
00:27:09.173 [2024-11-20 11:21:36.467493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.467531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.467785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.467817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.468113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.468148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.468371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.468403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.468670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.468703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 
00:27:09.173 [2024-11-20 11:21:36.469007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.469042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.469234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.469266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.469390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.469423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.469713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.469746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.470000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.470033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 
00:27:09.173 [2024-11-20 11:21:36.470304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.470337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.470538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.470571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.470868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.470901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.471120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.471155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.471422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.471455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 
00:27:09.173 [2024-11-20 11:21:36.471657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.471690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.471964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.471998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.472183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.472215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.472483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.472515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 00:27:09.173 [2024-11-20 11:21:36.472764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.472796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it. 
00:27:09.173 [2024-11-20 11:21:36.473097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.173 [2024-11-20 11:21:36.473132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.173 qpair failed and we were unable to recover it.
[identical connect() failure / qpair recovery messages (errno = 111, tqpair=0x16e5ba0, addr=10.0.0.2, port=4420) repeated through 11:21:36.504; repeats omitted]
00:27:09.177 [2024-11-20 11:21:36.505269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.505304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.505563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.505598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.505886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.505921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.506221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.506255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.506454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.506488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 
00:27:09.177 [2024-11-20 11:21:36.506762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.506796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.506962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.506998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.507216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.507249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.507386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.507418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.507571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.507605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 
00:27:09.177 [2024-11-20 11:21:36.507880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.507912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.508148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.508184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.508401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.508435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.508652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.508686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.508895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.508934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 
00:27:09.177 [2024-11-20 11:21:36.509160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.509194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.509391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.509425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.509625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.509658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.509789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.509821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.510016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.510051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 
00:27:09.177 [2024-11-20 11:21:36.510237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.510271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.177 [2024-11-20 11:21:36.510461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.177 [2024-11-20 11:21:36.510495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.177 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.510679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.510713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.510988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.511022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.511145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.511178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 
00:27:09.178 [2024-11-20 11:21:36.511451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.511484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.511667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.511700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.511881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.511915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.512199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.512235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.512491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.512524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 
00:27:09.178 [2024-11-20 11:21:36.512740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.512774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.513047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.513083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.513302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.513336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.513631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.513663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.513931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.513992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 
00:27:09.178 [2024-11-20 11:21:36.514267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.514300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.514580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.514615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.514900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.514934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.515220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.515255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.515530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.515565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 
00:27:09.178 [2024-11-20 11:21:36.515754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.515786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.515987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.516021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.516284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.516316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.516528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.516561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.516762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.516795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 
00:27:09.178 [2024-11-20 11:21:36.516998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.517032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.517280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.517312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.517513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.517548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.517743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.517776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.517984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.518018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 
00:27:09.178 [2024-11-20 11:21:36.518290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.518323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.518646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.518679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.518807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.518839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.519036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.519070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 00:27:09.178 [2024-11-20 11:21:36.519348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.178 [2024-11-20 11:21:36.519383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.178 qpair failed and we were unable to recover it. 
00:27:09.179 [2024-11-20 11:21:36.519600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.519633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.519836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.519869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.520061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.520095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.520382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.520416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.520694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.520728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 
00:27:09.179 [2024-11-20 11:21:36.520913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.520957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.521159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.521193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.521456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.521491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.521631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.521663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.521803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.521836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 
00:27:09.179 [2024-11-20 11:21:36.522087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.522122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.522260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.522294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.522550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.522583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.522712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.522745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.523020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.523054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 
00:27:09.179 [2024-11-20 11:21:36.523254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.523287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.523562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.523597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.523881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.523915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.524183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.524218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.524515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.524557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 
00:27:09.179 [2024-11-20 11:21:36.524821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.524855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.525166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.525201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.525418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.525452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.525771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.525805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 00:27:09.179 [2024-11-20 11:21:36.525917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.179 [2024-11-20 11:21:36.525960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.179 qpair failed and we were unable to recover it. 
[... the same three-line record (connect() failed, errno = 111; sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-11-20 11:21:36.526260 through 11:21:36.552061 ...]
00:27:09.182 [2024-11-20 11:21:36.552210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.182 [2024-11-20 11:21:36.552242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.182 qpair failed and we were unable to recover it.
00:27:09.182 [2024-11-20 11:21:36.552494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.182 [2024-11-20 11:21:36.552527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.182 qpair failed and we were unable to recover it.
00:27:09.182 [2024-11-20 11:21:36.552867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.182 [2024-11-20 11:21:36.552938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.182 qpair failed and we were unable to recover it.
00:27:09.182 [2024-11-20 11:21:36.553203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.182 [2024-11-20 11:21:36.553249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.182 qpair failed and we were unable to recover it.
00:27:09.182 [2024-11-20 11:21:36.553561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.182 [2024-11-20 11:21:36.553601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.182 qpair failed and we were unable to recover it.
00:27:09.182 [2024-11-20 11:21:36.553839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.182 [2024-11-20 11:21:36.553880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.182 qpair failed and we were unable to recover it.
00:27:09.182 [2024-11-20 11:21:36.554121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.182 [2024-11-20 11:21:36.554164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.182 qpair failed and we were unable to recover it.
00:27:09.182 [2024-11-20 11:21:36.554487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.182 [2024-11-20 11:21:36.554528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.182 qpair failed and we were unable to recover it.
00:27:09.182 [2024-11-20 11:21:36.554748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.183 [2024-11-20 11:21:36.554790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.183 qpair failed and we were unable to recover it.
00:27:09.183 [2024-11-20 11:21:36.555011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.183 [2024-11-20 11:21:36.555053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.183 qpair failed and we were unable to recover it.
00:27:09.183 [2024-11-20 11:21:36.555219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.183 [2024-11-20 11:21:36.555268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.183 qpair failed and we were unable to recover it.
00:27:09.183 [2024-11-20 11:21:36.555615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.183 [2024-11-20 11:21:36.555653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.183 qpair failed and we were unable to recover it.
00:27:09.183 [2024-11-20 11:21:36.555930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.183 [2024-11-20 11:21:36.555989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.183 qpair failed and we were unable to recover it.
00:27:09.183 [2024-11-20 11:21:36.556237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.183 [2024-11-20 11:21:36.556272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.183 qpair failed and we were unable to recover it.
00:27:09.183 [2024-11-20 11:21:36.556549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.183 [2024-11-20 11:21:36.556583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.183 qpair failed and we were unable to recover it.
00:27:09.183 [2024-11-20 11:21:36.556790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.556822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.556978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.557014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.557168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.557202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.557394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.557427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.557666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.557698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 
00:27:09.183 [2024-11-20 11:21:36.557891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.557923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.558117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.558173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.558448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.558481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.558661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.558693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.558840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.558871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 
00:27:09.183 [2024-11-20 11:21:36.559143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.559176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.559449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.559483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.559685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.559717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.559925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.559977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.560234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.560267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 
00:27:09.183 [2024-11-20 11:21:36.560448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.560480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.560783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.560817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.561067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.561101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.561294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.561326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.561510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.561542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 
00:27:09.183 [2024-11-20 11:21:36.561745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.561779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.562062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.562098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.562376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.562409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.562612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.562644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.562918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.562961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 
00:27:09.183 [2024-11-20 11:21:36.563150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.183 [2024-11-20 11:21:36.563182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.183 qpair failed and we were unable to recover it. 00:27:09.183 [2024-11-20 11:21:36.563428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.563461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.563733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.563767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.564030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.564066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.564316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.564351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 
00:27:09.184 [2024-11-20 11:21:36.564625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.564656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.564862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.564895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.565151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.565188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.565372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.565405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.565652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.565685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 
00:27:09.184 [2024-11-20 11:21:36.565970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.566005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.566144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.566177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.566378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.566412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.566673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.566707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.566823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.566855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 
00:27:09.184 [2024-11-20 11:21:36.567168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.567203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.567386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.567424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.567670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.567703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.567985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.568020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.568273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.568308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 
00:27:09.184 [2024-11-20 11:21:36.568502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.568534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.568711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.568744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.568959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.568995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.569127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.569162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.569381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.569413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 
00:27:09.184 [2024-11-20 11:21:36.569712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.569746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.570035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.570069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.570347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.570380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.570589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.570621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.570819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.570854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 
00:27:09.184 [2024-11-20 11:21:36.571067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.571102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.571248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.571282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.571498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.571534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.571696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.571729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.184 qpair failed and we were unable to recover it. 00:27:09.184 [2024-11-20 11:21:36.571866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.184 [2024-11-20 11:21:36.571902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 
00:27:09.185 [2024-11-20 11:21:36.572190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.572224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.572498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.572531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.572674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.572707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.572986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.573021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.573142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.573175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 
00:27:09.185 [2024-11-20 11:21:36.573357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.573390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.573646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.573682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.573829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.573862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.574072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.574115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.574316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.574351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 
00:27:09.185 [2024-11-20 11:21:36.574546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.574577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.574794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.574827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.575098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.575134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.575331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.575363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.575593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.575626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 
00:27:09.185 [2024-11-20 11:21:36.575814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.575850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.576127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.576161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.576357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.576392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.576668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.576701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 00:27:09.185 [2024-11-20 11:21:36.576962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.185 [2024-11-20 11:21:36.576997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.185 qpair failed and we were unable to recover it. 
00:27:09.185 (the connect() failed / sock connection error / qpair failed triple above repeats, timestamps 11:21:36.577302 through 11:21:36.605680)
00:27:09.189 [2024-11-20 11:21:36.605880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.605919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.606206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.606273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.606537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.606581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.606891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.606931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.607295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.607337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 
00:27:09.189 [2024-11-20 11:21:36.607589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.607631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.607930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.607989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.608181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.608223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.608487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.608527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.608794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.608834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 
00:27:09.189 [2024-11-20 11:21:36.609056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.609099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.609279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.609328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.609519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.609560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.609863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.609904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.610205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.610245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 
00:27:09.189 [2024-11-20 11:21:36.610455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.610495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.610832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.610869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.611079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.611114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.611304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.611335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.611492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.611524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 
00:27:09.189 [2024-11-20 11:21:36.611734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.611766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.612036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.612069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.612280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.612313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.612535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.612568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.612703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.612735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 
00:27:09.189 [2024-11-20 11:21:36.612927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.612981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.613169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.613202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.613402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.613441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.613715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.613748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.189 [2024-11-20 11:21:36.613868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.613901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 
00:27:09.189 [2024-11-20 11:21:36.614207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.189 [2024-11-20 11:21:36.614241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.189 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.614358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.614388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.614594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.614627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.614905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.614937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.615155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.615191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 
00:27:09.190 [2024-11-20 11:21:36.615440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.615473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.615602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.615633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.615852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.615886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.616116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.616150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.616365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.616397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 
00:27:09.190 [2024-11-20 11:21:36.616529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.616560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.616758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.616790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.616993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.617026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.617294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.617327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.617613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.617645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 
00:27:09.190 [2024-11-20 11:21:36.617853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.617885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.618160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.618194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.618470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.618504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.618788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.618820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.619111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.619146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 
00:27:09.190 [2024-11-20 11:21:36.619422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.619457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.619608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.619641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.619828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.619860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.620042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.620075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.620289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.620329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 
00:27:09.190 [2024-11-20 11:21:36.620475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.620507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.620766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.620797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.621074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.621110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.621390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.621424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.190 [2024-11-20 11:21:36.621615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.621648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 
00:27:09.190 [2024-11-20 11:21:36.621903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.190 [2024-11-20 11:21:36.621935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.190 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.622225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.622258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.622581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.622616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.622889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.622920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.623215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.623250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 
00:27:09.191 [2024-11-20 11:21:36.623451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.623485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.623755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.623788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.623905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.623936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.624187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.624222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.624410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.624442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 
00:27:09.191 [2024-11-20 11:21:36.624650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.624687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.624961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.624998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.625280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.625314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.625508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.625540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 00:27:09.191 [2024-11-20 11:21:36.625739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.191 [2024-11-20 11:21:36.625771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.191 qpair failed and we were unable to recover it. 
00:27:09.191 [2024-11-20 11:21:36.626048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.626082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.626360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.626393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.626675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.626708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.626970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.627005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.627214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.627248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.627395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.627427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.627686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.627720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.627986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.628022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.628274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.628308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.628529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.628562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.628789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.628820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.191 qpair failed and we were unable to recover it.
00:27:09.191 [2024-11-20 11:21:36.629094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.191 [2024-11-20 11:21:36.629129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.629312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.629345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.629527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.629560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.629858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.629891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.630157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.630193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.630490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.630523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.630747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.630778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.631070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.631106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.631234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.631267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.631526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.631560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.631801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.631835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.632078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.632114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.632418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.632452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.632662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.632696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.632899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.632931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.633137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.633170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.633424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.633456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.633738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.633772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.633973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.634008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.634201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.634234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.634435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.634469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.634699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.634731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.635019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.635055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.635242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.635276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.635460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.635495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.635703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.635735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.636012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.636047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.636203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.636238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.636516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.636549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.636751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.465 [2024-11-20 11:21:36.636785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.465 qpair failed and we were unable to recover it.
00:27:09.465 [2024-11-20 11:21:36.636913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.636960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.637209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.637243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.637375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.637409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.637659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.637692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.637881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.637914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.638130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.638164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.638364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.638404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.638658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.638691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.638875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.638908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.639203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.639239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.639498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.639531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.639805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.639837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.639972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.640007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.640279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.640314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.640589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.640623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.640914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.640960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.641233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.641264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.641530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.641563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.641791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.641823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.642030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.642066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.642356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.642390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.642607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.642640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.642775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.642808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.643056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.643091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.643366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.643398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.643586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.643620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.643885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.643917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.644123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.644156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.644340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.644375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.644650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.644684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.644963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.644997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.645300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.645335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.645540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.466 [2024-11-20 11:21:36.645574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.466 qpair failed and we were unable to recover it.
00:27:09.466 [2024-11-20 11:21:36.645760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.645798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.646079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.646115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.646241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.646273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.646405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.646438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.646709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.646742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.646929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.646987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.647247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.647278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.647551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.647583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.647863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.647898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.648233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.648269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.648462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.648495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.648768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.648801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.649018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.649052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.649305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.649339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.649629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.649661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.649855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.649889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.650049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.650082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.650294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.650327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.650509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.650543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.650740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.650772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.651045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.651079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.651293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.651326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.651587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.651620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.651820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.651853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.651979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.652014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.652267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.652300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.652526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.652559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.652853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.652886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.653206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.653240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.653514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.653548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.653834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.653866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.654023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.654057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.654263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.654298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.467 qpair failed and we were unable to recover it.
00:27:09.467 [2024-11-20 11:21:36.654573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.467 [2024-11-20 11:21:36.654605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.468 qpair failed and we were unable to recover it.
00:27:09.468 [2024-11-20 11:21:36.654786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.468 [2024-11-20 11:21:36.654819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.468 qpair failed and we were unable to recover it.
00:27:09.468 [2024-11-20 11:21:36.655095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.468 [2024-11-20 11:21:36.655130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.468 qpair failed and we were unable to recover it.
00:27:09.468 [2024-11-20 11:21:36.655363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.468 [2024-11-20 11:21:36.655396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.468 qpair failed and we were unable to recover it.
00:27:09.468 [2024-11-20 11:21:36.655599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.468 [2024-11-20 11:21:36.655631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.468 qpair failed and we were unable to recover it.
00:27:09.468 [2024-11-20 11:21:36.655884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.468 [2024-11-20 11:21:36.655916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.468 qpair failed and we were unable to recover it.
00:27:09.468 [2024-11-20 11:21:36.656179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.468 [2024-11-20 11:21:36.656213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.468 qpair failed and we were unable to recover it.
00:27:09.468 [2024-11-20 11:21:36.656424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.468 [2024-11-20 11:21:36.656458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.468 qpair failed and we were unable to recover it.
00:27:09.468 [2024-11-20 11:21:36.656666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.656700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.656890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.656925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.657211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.657246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.657475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.657508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.657792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.657825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 
00:27:09.468 [2024-11-20 11:21:36.658016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.658050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.658329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.658363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.658584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.658616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.658753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.658789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.659070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.659104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 
00:27:09.468 [2024-11-20 11:21:36.659313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.659346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.659588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.659621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.659820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.659852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.660053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.660087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.660366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.660399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 
00:27:09.468 [2024-11-20 11:21:36.660677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.660709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.661000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.661035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.661223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.661255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.661452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.661484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.661685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.661717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 
00:27:09.468 [2024-11-20 11:21:36.661969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.662002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.662202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.662235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.662372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.662404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.662602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.662634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.468 [2024-11-20 11:21:36.662883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.662915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 
00:27:09.468 [2024-11-20 11:21:36.663234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.468 [2024-11-20 11:21:36.663269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.468 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.663470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.663502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.663808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.663846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.664113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.664148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.664431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.664463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 
00:27:09.469 [2024-11-20 11:21:36.664666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.664698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.664920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.664961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.665186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.665219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.665365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.665397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.665527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.665559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 
00:27:09.469 [2024-11-20 11:21:36.665735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.665767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.666086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.666120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.666315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.666347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.666547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.666579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.666853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.666886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 
00:27:09.469 [2024-11-20 11:21:36.667183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.667216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.667484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.667517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.667743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.667776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.667973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.668007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.668261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.668293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 
00:27:09.469 [2024-11-20 11:21:36.668475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.668508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.668696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.668727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.668925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.668968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.669187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.669220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.669402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.669434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 
00:27:09.469 [2024-11-20 11:21:36.669682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.669715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.669914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.669946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.670235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.670268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.670465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.670498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.670676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.670715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 
00:27:09.469 [2024-11-20 11:21:36.670912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.670945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.671172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.671205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.671480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.671512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.469 [2024-11-20 11:21:36.671791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.469 [2024-11-20 11:21:36.671824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.469 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.671971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.672006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 
00:27:09.470 [2024-11-20 11:21:36.672214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.672246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.672559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.672591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.672858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.672891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.673152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.673186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.673485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.673517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 
00:27:09.470 [2024-11-20 11:21:36.673804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.673836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.674113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.674148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.674411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.674444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.674741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.674773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.675001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.675035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 
00:27:09.470 [2024-11-20 11:21:36.675308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.675340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.675545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.675577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.675846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.675877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.676195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.676230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 00:27:09.470 [2024-11-20 11:21:36.676423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.470 [2024-11-20 11:21:36.676456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:09.470 qpair failed and we were unable to recover it. 
00:27:09.470 [2024-11-20 11:21:36.676592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.470 [2024-11-20 11:21:36.676624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:09.470 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats for tqpair=0x16e5ba0 from 11:21:36.676878 through 11:21:36.689427 ...]
00:27:09.472 [2024-11-20 11:21:36.689778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.472 [2024-11-20 11:21:36.689856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:09.472 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f6844000b90 from 11:21:36.690082 through 11:21:36.708610 ...]
00:27:09.474 [2024-11-20 11:21:36.708835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.708866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.709119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.709153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.709405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.709436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.709688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.709719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.710026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.710060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 
00:27:09.474 [2024-11-20 11:21:36.710321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.710352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.710581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.710612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.710886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.710919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.711248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.711314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.711566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.711608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 
00:27:09.474 [2024-11-20 11:21:36.711836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.711877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.712150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.712193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.712478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.712528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.712836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.712876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.713196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.713238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 
00:27:09.474 [2024-11-20 11:21:36.713521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.713560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.713786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.713825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.714130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.714172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.714474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.714514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.714827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.714868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 
00:27:09.474 [2024-11-20 11:21:36.715123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.715166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.715378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.715417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.715625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.715664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.715886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.715926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.716175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.716216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 
00:27:09.474 [2024-11-20 11:21:36.716527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.716567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.716812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.716852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.717081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.474 [2024-11-20 11:21:36.717123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.474 qpair failed and we were unable to recover it. 00:27:09.474 [2024-11-20 11:21:36.717372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.717411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.717713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.717752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 
00:27:09.475 [2024-11-20 11:21:36.718065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.718107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.718275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.718321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.718630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.718671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.718974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.719015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.719322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.719362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 
00:27:09.475 [2024-11-20 11:21:36.719587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.719626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.719782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.719830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.720141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.720183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.720421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.720461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.720779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.720820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 
00:27:09.475 [2024-11-20 11:21:36.721126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.721168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.721476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.721516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.721751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.721791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.722077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.722120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.722412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.722451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 
00:27:09.475 [2024-11-20 11:21:36.722729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.722768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.723087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.723130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.723459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.723500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.723823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.723863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.724204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.724246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 
00:27:09.475 [2024-11-20 11:21:36.724480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.724519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.724773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.724812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.725063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.725113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.725419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.725460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.725766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.725804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 
00:27:09.475 [2024-11-20 11:21:36.726089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.726130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.475 qpair failed and we were unable to recover it. 00:27:09.475 [2024-11-20 11:21:36.726458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.475 [2024-11-20 11:21:36.726498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.726713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.726753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.727058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.727100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.727326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.727365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 
00:27:09.476 [2024-11-20 11:21:36.727680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.727722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.728060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.728101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.728424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.728463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.728770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.728809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.729115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.729156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 
00:27:09.476 [2024-11-20 11:21:36.729466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.729505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.729821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.729862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.730131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.730172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.730454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.730495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.730795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.730834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 
00:27:09.476 [2024-11-20 11:21:36.731140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.731183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.731532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.731573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.731890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.731930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.732251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.732292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.732595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.732635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 
00:27:09.476 [2024-11-20 11:21:36.732942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.732992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.733292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.733331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.733635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.733676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.733986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.734028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.734322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.734363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 
00:27:09.476 [2024-11-20 11:21:36.734670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.734713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.735006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.735050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.735364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.735404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.735695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.735734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 00:27:09.476 [2024-11-20 11:21:36.736051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.476 [2024-11-20 11:21:36.736093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.476 qpair failed and we were unable to recover it. 
00:27:09.476 [2024-11-20 11:21:36.736349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.476 [2024-11-20 11:21:36.736388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.476 qpair failed and we were unable to recover it.
00:27:09.476 [2024-11-20 11:21:36.736679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.476 [2024-11-20 11:21:36.736718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.476 qpair failed and we were unable to recover it.
00:27:09.476 [2024-11-20 11:21:36.737010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.476 [2024-11-20 11:21:36.737053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.476 qpair failed and we were unable to recover it.
00:27:09.476 [2024-11-20 11:21:36.737340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.476 [2024-11-20 11:21:36.737380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.476 qpair failed and we were unable to recover it.
00:27:09.476 [2024-11-20 11:21:36.737613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.737651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.737974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.738016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.738356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.738397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.738643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.738690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.738989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.739032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.739277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.739317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.739624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.739663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.739895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.739935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.740160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.740201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.740500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.740539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.740845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.740884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.741221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.741265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.741546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.741586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.741866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.741906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.742228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.742269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.742590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.742628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.742879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.742917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.743096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.743147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.743362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.743401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.743679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.743718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.744003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.744043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.744321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.744360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.744672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.744713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.744978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.745021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.745320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.745363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.745644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.745685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.745992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.746033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.746295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.746334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.746624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.746664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.746835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.746883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.747235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.747276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.747511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.747552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.747835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.477 [2024-11-20 11:21:36.747875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.477 qpair failed and we were unable to recover it.
00:27:09.477 [2024-11-20 11:21:36.748204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.748246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.748471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.748510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.748834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.748874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.749213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.749254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.749574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.749617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.749898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.749939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.750195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.750235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.750558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.750597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.750887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.750928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.751230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.751269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.751569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.751617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.751922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.751981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.752290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.752332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.752571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.752612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.752924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.752982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.753197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.753237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.753548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.753586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.753834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.753875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.754206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.754250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.754488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.754527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.754855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.754895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.755074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.755128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.755478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.755517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.755701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.755745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.756060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.756103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.756415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.756456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.756738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.756777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.757040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.757083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.757312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.757351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.757586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.757626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.757928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.757986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.758298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.758337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.758492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.478 [2024-11-20 11:21:36.758536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.478 qpair failed and we were unable to recover it.
00:27:09.478 [2024-11-20 11:21:36.758755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.758795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.759072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.759115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.759418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.759458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.759701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.759742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.760048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.760090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.760377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.760417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.760721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.760763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.761081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.761123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.761405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.761445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.761665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.761705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.761934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.761995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.762333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.762374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.762677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.762716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.763022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.763063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.763300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.763341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.763515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.763562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.763893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.763934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.764213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.764260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.764549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.764590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.764875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.764916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.765169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.765209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.765434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.765475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.765704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.765744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.765977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.766019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.766218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.479 [2024-11-20 11:21:36.766268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:09.479 qpair failed and we were unable to recover it.
00:27:09.479 [2024-11-20 11:21:36.766510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.479 [2024-11-20 11:21:36.766551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.479 qpair failed and we were unable to recover it. 00:27:09.479 [2024-11-20 11:21:36.766856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.479 [2024-11-20 11:21:36.766896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.479 qpair failed and we were unable to recover it. 00:27:09.479 [2024-11-20 11:21:36.767155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.479 [2024-11-20 11:21:36.767197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.479 qpair failed and we were unable to recover it. 00:27:09.479 [2024-11-20 11:21:36.767449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.479 [2024-11-20 11:21:36.767491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.479 qpair failed and we were unable to recover it. 00:27:09.479 [2024-11-20 11:21:36.767796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.479 [2024-11-20 11:21:36.767836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.479 qpair failed and we were unable to recover it. 
00:27:09.479 [2024-11-20 11:21:36.768080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.479 [2024-11-20 11:21:36.768123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.479 qpair failed and we were unable to recover it. 00:27:09.479 [2024-11-20 11:21:36.768375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.479 [2024-11-20 11:21:36.768417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.479 qpair failed and we were unable to recover it. 00:27:09.479 [2024-11-20 11:21:36.768695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.479 [2024-11-20 11:21:36.768734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.479 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.768942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.769008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.769246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.769288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 
00:27:09.480 [2024-11-20 11:21:36.769521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.769560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.769864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.769904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.770161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.770201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.770443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.770479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.770752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.770788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 
00:27:09.480 [2024-11-20 11:21:36.770934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.770993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.771150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.771194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.771492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.771532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.771837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.771877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.772125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.772164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 
00:27:09.480 [2024-11-20 11:21:36.772379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.772416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.772713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.772749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.772972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.773011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.773300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.773338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.773576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.773612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 
00:27:09.480 [2024-11-20 11:21:36.773838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.773875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.774102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.774140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.774347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.774383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.774656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.774691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.774900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.774935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 
00:27:09.480 [2024-11-20 11:21:36.775232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.775271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.775587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.775623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.775790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.775837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.776054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.776092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.776336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.776372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 
00:27:09.480 [2024-11-20 11:21:36.776616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.776654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.776855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.480 [2024-11-20 11:21:36.776892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.480 qpair failed and we were unable to recover it. 00:27:09.480 [2024-11-20 11:21:36.777190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.777227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.777502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.777539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.777834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.777870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 
00:27:09.481 [2024-11-20 11:21:36.778155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.778193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.778432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.778469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.778747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.778783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.779010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.779049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.779276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.779313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 
00:27:09.481 [2024-11-20 11:21:36.779476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.779516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.779827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.779864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.780041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.780081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.780294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.780329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.780621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.780654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 
00:27:09.481 [2024-11-20 11:21:36.780812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.780850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.781073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.781108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.781402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.781442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.781666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.781706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.781925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.781979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 
00:27:09.481 [2024-11-20 11:21:36.782192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.782224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.782540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.782574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.782803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.782837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.782972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.783011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.783237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.783273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 
00:27:09.481 [2024-11-20 11:21:36.783481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.783514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.783662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.783701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.783918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.783987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.784291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.784333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.784583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.784617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 
00:27:09.481 [2024-11-20 11:21:36.784920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.784987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.785213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.785267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.785575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.785616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.785966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.786009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 00:27:09.481 [2024-11-20 11:21:36.786224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.481 [2024-11-20 11:21:36.786258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.481 qpair failed and we were unable to recover it. 
00:27:09.482 [2024-11-20 11:21:36.786529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.786563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.786772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.786805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.786943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.787009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.787297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.787331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.787550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.787583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 
00:27:09.482 [2024-11-20 11:21:36.787849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.787884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.788158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.788192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.788423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.788457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.788700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.788733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.789023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.789059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 
00:27:09.482 [2024-11-20 11:21:36.789274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.789307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.789578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.789612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.789824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.789857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.790128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.790173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 00:27:09.482 [2024-11-20 11:21:36.790337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.790364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 
00:27:09.482 [2024-11-20 11:21:36.790612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.482 [2024-11-20 11:21:36.790643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.482 qpair failed and we were unable to recover it. 
[... same connect() failed (errno = 111) / sock connection error / qpair failed message triple for tqpair=0x7f6850000b90, addr=10.0.0.2, port=4420 repeated through 2024-11-20 11:21:36.821370 ...]
00:27:09.485 [2024-11-20 11:21:36.821664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.485 [2024-11-20 11:21:36.821698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.821823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.821856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.822074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.822110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.822299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.822333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.822518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.822553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 
00:27:09.486 [2024-11-20 11:21:36.822827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.822861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.823151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.823188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.823319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.823353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.823545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.823578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.823782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.823817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 
00:27:09.486 [2024-11-20 11:21:36.824097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.824132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.824382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.824415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.824670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.824705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.824969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.825005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.825301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.825335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 
00:27:09.486 [2024-11-20 11:21:36.825560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.825594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.825802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.825837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.826066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.826100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.826353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.826387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.826705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.826783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 
00:27:09.486 [2024-11-20 11:21:36.827078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.827118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.827319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.827354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.827612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.827645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.827920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.827965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.828184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.828219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 
00:27:09.486 [2024-11-20 11:21:36.828470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.828504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.828803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.828837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.829149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.829185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.829462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.829495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 00:27:09.486 [2024-11-20 11:21:36.829697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.486 [2024-11-20 11:21:36.829730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.486 qpair failed and we were unable to recover it. 
00:27:09.487 [2024-11-20 11:21:36.830000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.830035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.830293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.830327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.830579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.830623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.830838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.830872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.831064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.831100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 
00:27:09.487 [2024-11-20 11:21:36.831359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.831392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.831694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.831729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.831994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.832030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.832312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.832345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.832630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.832665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 
00:27:09.487 [2024-11-20 11:21:36.832804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.832838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.833136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.833172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.833426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.833459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.833666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.833702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.833979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.834015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 
00:27:09.487 [2024-11-20 11:21:36.834316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.834349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.834611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.834645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.834966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.835002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.835133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.835167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.835441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.835476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 
00:27:09.487 [2024-11-20 11:21:36.835743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.835777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.836078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.836115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.836377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.836410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.836702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.836736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.836970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.837005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 
00:27:09.487 [2024-11-20 11:21:36.837212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.837245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.837426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.837459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.837652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.837686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.837964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.838000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.838211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.838246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 
00:27:09.487 [2024-11-20 11:21:36.838437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.838472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.838693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.487 [2024-11-20 11:21:36.838727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.487 qpair failed and we were unable to recover it. 00:27:09.487 [2024-11-20 11:21:36.838942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.838995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 00:27:09.488 [2024-11-20 11:21:36.839273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.839307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 00:27:09.488 [2024-11-20 11:21:36.839576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.839610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 
00:27:09.488 [2024-11-20 11:21:36.839904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.839938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 00:27:09.488 [2024-11-20 11:21:36.840209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.840244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 00:27:09.488 [2024-11-20 11:21:36.840460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.840494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 00:27:09.488 [2024-11-20 11:21:36.840771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.840805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 00:27:09.488 [2024-11-20 11:21:36.841056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.841091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 
00:27:09.488 [2024-11-20 11:21:36.841286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.841320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 00:27:09.488 [2024-11-20 11:21:36.841598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.841632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 00:27:09.488 [2024-11-20 11:21:36.841908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.841943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 00:27:09.488 [2024-11-20 11:21:36.842161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.842196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 00:27:09.488 [2024-11-20 11:21:36.842388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.488 [2024-11-20 11:21:36.842421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.488 qpair failed and we were unable to recover it. 
00:27:09.488 [2024-11-20 11:21:36.842695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.488 [2024-11-20 11:21:36.842728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:09.488 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 11:21:36.842937 through 11:21:36.873681 ...]
00:27:09.492 [2024-11-20 11:21:36.873874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.873908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.874099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.874134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.874359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.874393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.874699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.874733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.874925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.874970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 
00:27:09.492 [2024-11-20 11:21:36.875247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.875280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.875500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.875535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.875737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.875770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.876001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.876036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.876229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.876263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 
00:27:09.492 [2024-11-20 11:21:36.876458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.876491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.876771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.876805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.877060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.877095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.877366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.877400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.877623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.877656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 
00:27:09.492 [2024-11-20 11:21:36.877841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.877874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.878143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.878179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.878377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.878411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.878631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.878664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.878851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.878884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 
00:27:09.492 [2024-11-20 11:21:36.879098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.879132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.879411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.879445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.879626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.879659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.879960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.879996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.880229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.880262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 
00:27:09.492 [2024-11-20 11:21:36.880563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.880596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.880866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.880899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.881095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.881130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.881381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.881415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.881692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.881738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 
00:27:09.492 [2024-11-20 11:21:36.881962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.881997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.492 [2024-11-20 11:21:36.882263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.492 [2024-11-20 11:21:36.882297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.492 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.882420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.882452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.882728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.882761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.883038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.883074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 
00:27:09.493 [2024-11-20 11:21:36.883254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.883288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.883565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.883598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.883724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.883757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.883964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.884000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.884199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.884232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 
00:27:09.493 [2024-11-20 11:21:36.884430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.884464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.884655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.884687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.884871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.884904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.885124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.885160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.885364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.885398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 
00:27:09.493 [2024-11-20 11:21:36.885650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.885683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.885883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.885918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.886117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.886150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.886433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.886467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.886726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.886759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 
00:27:09.493 [2024-11-20 11:21:36.887014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.887051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.887230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.887263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.887543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.887578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.887826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.887860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.888123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.888158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 
00:27:09.493 [2024-11-20 11:21:36.888437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.888471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.888756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.888789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.889067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.889103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.889391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.889425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.889705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.889738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 
00:27:09.493 [2024-11-20 11:21:36.889937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.889980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.890261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.890295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.890489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.890522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.890646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.493 [2024-11-20 11:21:36.890680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.493 qpair failed and we were unable to recover it. 00:27:09.493 [2024-11-20 11:21:36.890870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.890904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 
00:27:09.494 [2024-11-20 11:21:36.891208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.891244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.891443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.891477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.891678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.891711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.891918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.891963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.892187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.892227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 
00:27:09.494 [2024-11-20 11:21:36.892408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.892442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.892636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.892670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.892925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.892970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.893290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.893323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.893599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.893632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 
00:27:09.494 [2024-11-20 11:21:36.893825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.893859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.894131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.894166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.894368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.894402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.894530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.894563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-11-20 11:21:36.894788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.494 [2024-11-20 11:21:36.894821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.494 qpair failed and we were unable to recover it. 
00:27:09.498 [2024-11-20 11:21:36.924882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.924915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.925182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.925217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.925492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.925526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.925732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.925766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.926015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.926051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 
00:27:09.498 [2024-11-20 11:21:36.926264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.926296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.926412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.926443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.926694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.926728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.927012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.927047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.927295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.927329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 
00:27:09.498 [2024-11-20 11:21:36.927599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.927633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.927821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.927854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.927976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.928012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.928310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.928343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.928606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.928640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 
00:27:09.498 [2024-11-20 11:21:36.928894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.928927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.929189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.929222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.929472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.929505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.929807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.929841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.930104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.930138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 
00:27:09.498 [2024-11-20 11:21:36.930333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.930367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.930638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.930672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.930810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.930843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.931060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.931094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.931366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.931400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 
00:27:09.498 [2024-11-20 11:21:36.931686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.931719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.931927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.931975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.932255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.932288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.932469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.932502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.932699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.932733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 
00:27:09.498 [2024-11-20 11:21:36.932983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.933017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.498 [2024-11-20 11:21:36.933318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.498 [2024-11-20 11:21:36.933351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.498 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.933587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.933619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.933894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.933927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.934061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.934095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 
00:27:09.499 [2024-11-20 11:21:36.934368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.934402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.934535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.934569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.934785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.934819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.935117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.935152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.935335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.935378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 
00:27:09.499 [2024-11-20 11:21:36.935503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.935534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.935748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.935782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.936033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.936067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.936337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.936371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.936648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.936681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 
00:27:09.499 [2024-11-20 11:21:36.936972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.937007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.937278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.937312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.937587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.937620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.937911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.937943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.938207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.938241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 
00:27:09.499 [2024-11-20 11:21:36.938564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.938598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.938869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.938902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.939200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.939235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.939492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.939527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.939754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.939787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 
00:27:09.499 [2024-11-20 11:21:36.939989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.940024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.940300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.940333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.940606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.940639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.940965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.941000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.941297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.941330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 
00:27:09.499 [2024-11-20 11:21:36.941596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.941630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.941898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.941931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.942194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.942228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.942445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.499 [2024-11-20 11:21:36.942478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.499 qpair failed and we were unable to recover it. 00:27:09.499 [2024-11-20 11:21:36.942604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.500 [2024-11-20 11:21:36.942636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.500 qpair failed and we were unable to recover it. 
00:27:09.500 [2024-11-20 11:21:36.942906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.500 [2024-11-20 11:21:36.942939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.500 qpair failed and we were unable to recover it. 00:27:09.500 [2024-11-20 11:21:36.943241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.500 [2024-11-20 11:21:36.943276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.500 qpair failed and we were unable to recover it. 00:27:09.500 [2024-11-20 11:21:36.943461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.500 [2024-11-20 11:21:36.943495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.500 qpair failed and we were unable to recover it. 00:27:09.500 [2024-11-20 11:21:36.943695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.500 [2024-11-20 11:21:36.943729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.500 qpair failed and we were unable to recover it. 00:27:09.500 [2024-11-20 11:21:36.943920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.500 [2024-11-20 11:21:36.943967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.500 qpair failed and we were unable to recover it. 
00:27:09.500 [2024-11-20 11:21:36.944258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.500 [2024-11-20 11:21:36.944292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.500 qpair failed and we were unable to recover it. 00:27:09.500 [2024-11-20 11:21:36.944431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.500 [2024-11-20 11:21:36.944465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.500 qpair failed and we were unable to recover it. 00:27:09.500 [2024-11-20 11:21:36.944716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.500 [2024-11-20 11:21:36.944750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.500 qpair failed and we were unable to recover it. 00:27:09.778 [2024-11-20 11:21:36.945053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.778 [2024-11-20 11:21:36.945087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.778 qpair failed and we were unable to recover it. 00:27:09.778 [2024-11-20 11:21:36.945292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.778 [2024-11-20 11:21:36.945325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.778 qpair failed and we were unable to recover it. 
00:27:09.778 [2024-11-20 11:21:36.945599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.778 [2024-11-20 11:21:36.945633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:09.778 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats ~114 more times between 11:21:36.945 and 11:21:36.975 ...]
00:27:09.781 [2024-11-20 11:21:36.975908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.781 [2024-11-20 11:21:36.975941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.781 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.976240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.976274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.976551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.976585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.976868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.976902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.977125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.977160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-11-20 11:21:36.977443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.977477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.977776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.977809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.978001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.978036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.978229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.978262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.978535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.978575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-11-20 11:21:36.978856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.978891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.979194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.979228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.979490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.979524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.979818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.979851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.980045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.980080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-11-20 11:21:36.980279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.980313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.980564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.980597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.980719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.980750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.981022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.981056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.981184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.981218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-11-20 11:21:36.981492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.981525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.981799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.981832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.982124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.982160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.982399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.982434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.982701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.982735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-11-20 11:21:36.982925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.982968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.983248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.983280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.983401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.983434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.983640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.983673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.984014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.984048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 
00:27:09.782 [2024-11-20 11:21:36.984283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.984317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.984587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.984620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.984833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.984867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.782 [2024-11-20 11:21:36.985070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.782 [2024-11-20 11:21:36.985104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.782 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.985285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.985319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 
00:27:09.783 [2024-11-20 11:21:36.985461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.985495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.985772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.985810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.986018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.986055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.986246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.986279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.986481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.986515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 
00:27:09.783 [2024-11-20 11:21:36.986805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.986839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.987116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.987153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.987340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.987373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.987626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.987660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.987939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.987984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 
00:27:09.783 [2024-11-20 11:21:36.988258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.988292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.988494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.988529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.988721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.988755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.989030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.989066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.989353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.989387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 
00:27:09.783 [2024-11-20 11:21:36.989660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.989696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.989963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.990000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.990283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.990316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.990535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.990568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.990843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.990877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 
00:27:09.783 [2024-11-20 11:21:36.991183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.991218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.991477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.991510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.991811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.991844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.992109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.992144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.992395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.992428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 
00:27:09.783 [2024-11-20 11:21:36.992681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.783 [2024-11-20 11:21:36.992715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.783 qpair failed and we were unable to recover it. 00:27:09.783 [2024-11-20 11:21:36.993015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.993050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.993314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.993347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.993645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.993680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.993969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.994004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 
00:27:09.784 [2024-11-20 11:21:36.994190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.994223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.994495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.994528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.994814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.994847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.995146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.995182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.995449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.995484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 
00:27:09.784 [2024-11-20 11:21:36.995761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.995795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.996081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.996118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.996317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.996352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.996615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.996649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 00:27:09.784 [2024-11-20 11:21:36.996912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.996958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 
00:27:09.784 [2024-11-20 11:21:36.997162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.784 [2024-11-20 11:21:36.997197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.784 qpair failed and we were unable to recover it. 
00:27:09.784 [... the preceding three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim, with only the timestamps advancing, from 2024-11-20 11:21:36.997457 through 2024-11-20 11:21:37.026611 ...] 
00:27:09.787 [2024-11-20 11:21:37.026798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.787 [2024-11-20 11:21:37.026832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.787 qpair failed and we were unable to recover it. 00:27:09.787 [2024-11-20 11:21:37.027055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.787 [2024-11-20 11:21:37.027091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.787 qpair failed and we were unable to recover it. 00:27:09.787 [2024-11-20 11:21:37.027344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.027378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.027650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.027685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.027973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.028009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 
00:27:09.788 [2024-11-20 11:21:37.028305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.028338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.028618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.028659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.028856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.028890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.029206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.029242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.029441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.029474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 
00:27:09.788 [2024-11-20 11:21:37.029677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.029711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.029914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.029959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.030148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.030181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.030378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.030412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.030542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.030576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 
00:27:09.788 [2024-11-20 11:21:37.030848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.030881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.031108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.031143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.031347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.031380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.031633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.031667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.031971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.032008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 
00:27:09.788 [2024-11-20 11:21:37.032233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.032268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.032461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.032495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.032640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.032675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.032966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.033000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.033252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.033286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 
00:27:09.788 [2024-11-20 11:21:37.033583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.033618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.033806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.033840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.034057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.034093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.034209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.034245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.788 [2024-11-20 11:21:37.034364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.034397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 
00:27:09.788 [2024-11-20 11:21:37.034589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.788 [2024-11-20 11:21:37.034622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.788 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.034920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.034983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.035237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.035272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.035470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.035504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.035718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.035752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 
00:27:09.789 [2024-11-20 11:21:37.035969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.036006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.036148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.036181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.036466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.036499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.036627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.036661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.036883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.036919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 
00:27:09.789 [2024-11-20 11:21:37.037130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.037165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.037426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.037459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.037648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.037682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.037910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.037945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.038153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.038188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 
00:27:09.789 [2024-11-20 11:21:37.038486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.038519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.038729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.038768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.038976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.039013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.039221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.039256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.039390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.039423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 
00:27:09.789 [2024-11-20 11:21:37.039628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.039661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.039856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.039891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.040158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.040194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.040399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.040434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.040684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.040718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 
00:27:09.789 [2024-11-20 11:21:37.041013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.041049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.041346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.041380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.041664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.041698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.041885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.041919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.042133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.042170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 
00:27:09.789 [2024-11-20 11:21:37.042438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.042474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.042692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.042726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.042921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.042965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.789 qpair failed and we were unable to recover it. 00:27:09.789 [2024-11-20 11:21:37.043178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.789 [2024-11-20 11:21:37.043212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.043465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.043498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 
00:27:09.790 [2024-11-20 11:21:37.043700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.043735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.044014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.044050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.044256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.044288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.044519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.044552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.044743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.044773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 
00:27:09.790 [2024-11-20 11:21:37.045087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.045120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.045313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.045344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.045556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.045587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.045774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.045803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.046076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.046107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 
00:27:09.790 [2024-11-20 11:21:37.046409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.046441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.046756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.046788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.047045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.047076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.047342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.047374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 00:27:09.790 [2024-11-20 11:21:37.047520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.790 [2024-11-20 11:21:37.047552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.790 qpair failed and we were unable to recover it. 
00:27:09.793 [2024-11-20 11:21:37.075683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.793 [2024-11-20 11:21:37.075716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.793 qpair failed and we were unable to recover it. 00:27:09.793 [2024-11-20 11:21:37.075941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.793 [2024-11-20 11:21:37.075986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.793 qpair failed and we were unable to recover it. 00:27:09.793 [2024-11-20 11:21:37.076125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.793 [2024-11-20 11:21:37.076160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.793 qpair failed and we were unable to recover it. 00:27:09.793 [2024-11-20 11:21:37.076360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.793 [2024-11-20 11:21:37.076392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.793 qpair failed and we were unable to recover it. 00:27:09.793 [2024-11-20 11:21:37.076595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.793 [2024-11-20 11:21:37.076628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.793 qpair failed and we were unable to recover it. 
00:27:09.793 [2024-11-20 11:21:37.076832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.076865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.077093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.077128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.077262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.077296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.077434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.077470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.077683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.077719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 
00:27:09.794 [2024-11-20 11:21:37.077905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.077939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.078212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.078248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.078448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.078483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.078641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.078674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.078964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.079002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 
00:27:09.794 [2024-11-20 11:21:37.079145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.079179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.079315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.079355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.079570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.079604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.079720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.079755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.080033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.080070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 
00:27:09.794 [2024-11-20 11:21:37.080304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.080340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.080552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.080586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.080772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.080807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.081007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.081040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.081246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.081280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 
00:27:09.794 [2024-11-20 11:21:37.081577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.081611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.081876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.081911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.082119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.082153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.082302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.082336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.082552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.082583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 
00:27:09.794 [2024-11-20 11:21:37.082893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.082927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.083243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.083278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.083582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.083617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.083882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.794 [2024-11-20 11:21:37.083917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.794 qpair failed and we were unable to recover it. 00:27:09.794 [2024-11-20 11:21:37.084111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.084146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 
00:27:09.795 [2024-11-20 11:21:37.084349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.084384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.084570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.084604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.084804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.084839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.085113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.085149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.085266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.085299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 
00:27:09.795 [2024-11-20 11:21:37.085498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.085532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.085734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.085768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.085971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.086008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.086267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.086304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.086581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.086616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 
00:27:09.795 [2024-11-20 11:21:37.086892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.086926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.087215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.087249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.087525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.087558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.087756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.087789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.087986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.088022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 
00:27:09.795 [2024-11-20 11:21:37.088210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.088246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.088531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.088566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.088744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.088778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.088908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.088943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.089097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.089131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 
00:27:09.795 [2024-11-20 11:21:37.089262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.089295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.089550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.089592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.089845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.089879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.090183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.090220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.090349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.090385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 
00:27:09.795 [2024-11-20 11:21:37.090577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.090612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.090816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.090850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.091129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.091166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.091296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.091331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.091594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.091628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 
00:27:09.795 [2024-11-20 11:21:37.091750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.091784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.092009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.795 [2024-11-20 11:21:37.092042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.795 qpair failed and we were unable to recover it. 00:27:09.795 [2024-11-20 11:21:37.092263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.796 [2024-11-20 11:21:37.092295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.796 qpair failed and we were unable to recover it. 00:27:09.796 [2024-11-20 11:21:37.092548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.796 [2024-11-20 11:21:37.092581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.796 qpair failed and we were unable to recover it. 00:27:09.796 [2024-11-20 11:21:37.092782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.796 [2024-11-20 11:21:37.092815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.796 qpair failed and we were unable to recover it. 
00:27:09.796 [2024-11-20 11:21:37.093112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.796 [2024-11-20 11:21:37.093150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.796 qpair failed and we were unable to recover it. 00:27:09.796 [2024-11-20 11:21:37.093368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.796 [2024-11-20 11:21:37.093403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.796 qpair failed and we were unable to recover it. 00:27:09.796 [2024-11-20 11:21:37.093702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.796 [2024-11-20 11:21:37.093739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.796 qpair failed and we were unable to recover it. 00:27:09.796 [2024-11-20 11:21:37.094003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.796 [2024-11-20 11:21:37.094039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.796 qpair failed and we were unable to recover it. 00:27:09.796 [2024-11-20 11:21:37.094236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.796 [2024-11-20 11:21:37.094269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.796 qpair failed and we were unable to recover it. 
00:27:09.796 [2024-11-20 11:21:37.094394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.796 [2024-11-20 11:21:37.094428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.796 qpair failed and we were unable to recover it.
00:27:09.799 [2024-11-20 11:21:37.123556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.799 [2024-11-20 11:21:37.123590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.799 qpair failed and we were unable to recover it. 00:27:09.799 [2024-11-20 11:21:37.123704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.799 [2024-11-20 11:21:37.123738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.799 qpair failed and we were unable to recover it. 00:27:09.799 [2024-11-20 11:21:37.123861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.799 [2024-11-20 11:21:37.123893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.799 qpair failed and we were unable to recover it. 00:27:09.799 [2024-11-20 11:21:37.124189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.799 [2024-11-20 11:21:37.124226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.799 qpair failed and we were unable to recover it. 00:27:09.799 [2024-11-20 11:21:37.124434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.799 [2024-11-20 11:21:37.124469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.799 qpair failed and we were unable to recover it. 
00:27:09.800 [2024-11-20 11:21:37.124657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.124691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.124964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.125000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.125124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.125158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.125343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.125375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.125509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.125542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 
00:27:09.800 [2024-11-20 11:21:37.125746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.125779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.126030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.126066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.126271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.126304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.126504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.126542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.126757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.126791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 
00:27:09.800 [2024-11-20 11:21:37.126997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.127032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.127238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.127274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.127456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.127490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.127772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.127806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.128006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.128041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 
00:27:09.800 [2024-11-20 11:21:37.128244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.128279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.128527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.128559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.128708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.128743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.128888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.128919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.129075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.129110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 
00:27:09.800 [2024-11-20 11:21:37.129297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.129330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.129539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.129574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.129775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.129809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.130074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.130111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.130303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.130338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 
00:27:09.800 [2024-11-20 11:21:37.130480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.130513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.130767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.130801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.131000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.131036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.131170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.131203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.131316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.131348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 
00:27:09.800 [2024-11-20 11:21:37.131567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.131600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.131859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.131892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.800 qpair failed and we were unable to recover it. 00:27:09.800 [2024-11-20 11:21:37.132050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.800 [2024-11-20 11:21:37.132084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.132350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.132385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.132675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.132710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 
00:27:09.801 [2024-11-20 11:21:37.132921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.132970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.133192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.133228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.133419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.133453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.133585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.133619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.133844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.133880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 
00:27:09.801 [2024-11-20 11:21:37.134027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.134068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.134277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.134310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.134585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.134617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.134804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.134836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.134979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.135015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 
00:27:09.801 [2024-11-20 11:21:37.135216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.135250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.135502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.135537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.135720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.135752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.135978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.136020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.136305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.136339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 
00:27:09.801 [2024-11-20 11:21:37.136606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.136638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.136936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.136982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.137239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.137274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.137560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.137594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.137874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.137908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 
00:27:09.801 [2024-11-20 11:21:37.138187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.138223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.138430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.138463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.138649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.138683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.138879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.138913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.139128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.139163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 
00:27:09.801 [2024-11-20 11:21:37.139349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.139386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.139589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.139624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.139862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.139897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.140055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.140089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 00:27:09.801 [2024-11-20 11:21:37.140368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.801 [2024-11-20 11:21:37.140401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.801 qpair failed and we were unable to recover it. 
00:27:09.801 [2024-11-20 11:21:37.140651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.802 [2024-11-20 11:21:37.140685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.802 qpair failed and we were unable to recover it. 00:27:09.802 [2024-11-20 11:21:37.140912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.802 [2024-11-20 11:21:37.140959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.802 qpair failed and we were unable to recover it. 00:27:09.802 [2024-11-20 11:21:37.141085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.802 [2024-11-20 11:21:37.141118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.802 qpair failed and we were unable to recover it. 00:27:09.802 [2024-11-20 11:21:37.141338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.802 [2024-11-20 11:21:37.141370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.802 qpair failed and we were unable to recover it. 00:27:09.802 [2024-11-20 11:21:37.141560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.802 [2024-11-20 11:21:37.141592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.802 qpair failed and we were unable to recover it. 
00:27:09.802 [2024-11-20 11:21:37.141850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.802 [2024-11-20 11:21:37.141883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:09.802 qpair failed and we were unable to recover it.
00:27:09.805 [... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence for tqpair=0x7f684c000b90 (addr=10.0.0.2, port=4420) repeats through 2024-11-20 11:21:37.167506 ...]
00:27:09.805 [2024-11-20 11:21:37.167705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.805 [2024-11-20 11:21:37.167738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.805 qpair failed and we were unable to recover it. 00:27:09.805 [2024-11-20 11:21:37.167940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.805 [2024-11-20 11:21:37.167990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.805 qpair failed and we were unable to recover it. 00:27:09.805 [2024-11-20 11:21:37.168264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.805 [2024-11-20 11:21:37.168299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.805 qpair failed and we were unable to recover it. 00:27:09.805 [2024-11-20 11:21:37.168412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.805 [2024-11-20 11:21:37.168445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.805 qpair failed and we were unable to recover it. 00:27:09.805 [2024-11-20 11:21:37.168630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.805 [2024-11-20 11:21:37.168663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.805 qpair failed and we were unable to recover it. 
00:27:09.805 [2024-11-20 11:21:37.168861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.805 [2024-11-20 11:21:37.168895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.805 qpair failed and we were unable to recover it. 00:27:09.805 [2024-11-20 11:21:37.169049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.805 [2024-11-20 11:21:37.169083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.805 qpair failed and we were unable to recover it. 00:27:09.805 [2024-11-20 11:21:37.169277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.805 [2024-11-20 11:21:37.169310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.805 qpair failed and we were unable to recover it. 00:27:09.805 [2024-11-20 11:21:37.169434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.169467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.169668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.169701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 
00:27:09.806 [2024-11-20 11:21:37.169979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.170015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.170204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.170239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.170419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.170452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.170636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.170670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.170856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.170889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 
00:27:09.806 [2024-11-20 11:21:37.171089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.171123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.171256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.171290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.171487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.171522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.171637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.171670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.171856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.171889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 
00:27:09.806 [2024-11-20 11:21:37.172033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.172069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.172267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.172301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.172531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.172566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.172817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.172856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.173052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.173087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 
00:27:09.806 [2024-11-20 11:21:37.173199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.173232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.173341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.173373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.173562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.173596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.173787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.173819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.174011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.174046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 
00:27:09.806 [2024-11-20 11:21:37.174194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.174228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.174427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.174461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.174644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.174677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.174809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.174842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.175066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.175100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 
00:27:09.806 [2024-11-20 11:21:37.175292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.175326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.175456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.175489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.175690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.175724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.175979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.176016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.176206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.176240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 
00:27:09.806 [2024-11-20 11:21:37.176419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.806 [2024-11-20 11:21:37.176451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.806 qpair failed and we were unable to recover it. 00:27:09.806 [2024-11-20 11:21:37.176580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.176612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.176738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.176770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.176973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.177010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.177192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.177226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 
00:27:09.807 [2024-11-20 11:21:37.177364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.177395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.177595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.177629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.177829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.177863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.178099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.178134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.178343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.178377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 
00:27:09.807 [2024-11-20 11:21:37.178630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.178663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.178874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.178908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.179119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.179153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.179402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.179436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.179628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.179661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 
00:27:09.807 [2024-11-20 11:21:37.179881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.179914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.180108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.180142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.180270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.180302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.180479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.180513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.180655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.180688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 
00:27:09.807 [2024-11-20 11:21:37.180805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.180839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.180984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.181018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.181147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.181179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.181302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.181343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.181534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.181569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 
00:27:09.807 [2024-11-20 11:21:37.181748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.181781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.181976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.182011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.182202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.182236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.182432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.182466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.182657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.182690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 
00:27:09.807 [2024-11-20 11:21:37.182876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.182910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.183116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.183151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.183286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.183319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.807 [2024-11-20 11:21:37.183523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.807 [2024-11-20 11:21:37.183556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.807 qpair failed and we were unable to recover it. 00:27:09.808 [2024-11-20 11:21:37.183687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.808 [2024-11-20 11:21:37.183719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.808 qpair failed and we were unable to recover it. 
00:27:09.808 [2024-11-20 11:21:37.183912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.808 [2024-11-20 11:21:37.183958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:09.808 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111; nvme_tcp_qpair_connect_sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, with timestamps running from 2024-11-20 11:21:37.184 through 11:21:37.208 ...]
00:27:09.811 [2024-11-20 11:21:37.208626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.208659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 00:27:09.811 [2024-11-20 11:21:37.208779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.208812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 00:27:09.811 [2024-11-20 11:21:37.208997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.209032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 00:27:09.811 [2024-11-20 11:21:37.209157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.209190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 00:27:09.811 [2024-11-20 11:21:37.209381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.209414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 
00:27:09.811 [2024-11-20 11:21:37.209521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.209554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 00:27:09.811 [2024-11-20 11:21:37.209804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.209839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 00:27:09.811 [2024-11-20 11:21:37.210023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.210059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 00:27:09.811 [2024-11-20 11:21:37.210259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.210292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 00:27:09.811 [2024-11-20 11:21:37.210483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.210515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 
00:27:09.811 [2024-11-20 11:21:37.210623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.210654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 00:27:09.811 [2024-11-20 11:21:37.210772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.811 [2024-11-20 11:21:37.210805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.811 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.210923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.210968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.211093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.211127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.211328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.211362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 
00:27:09.812 [2024-11-20 11:21:37.211466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.211499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.211641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.211675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.211853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.211885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.212068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.212102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.212282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.212314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 
00:27:09.812 [2024-11-20 11:21:37.212430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.212464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.212660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.212693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.212827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.212860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.213010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.213043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.213231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.213265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 
00:27:09.812 [2024-11-20 11:21:37.213447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.213481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.213602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.213633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.213745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.213778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.213898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.213930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.214126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.214158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 
00:27:09.812 [2024-11-20 11:21:37.214425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.214457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.214641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.214673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.214915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.214957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.215234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.215274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.215416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.215448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 
00:27:09.812 [2024-11-20 11:21:37.215660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.215694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.215831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.215864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.216075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.216111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.216306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.216339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.216524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.216558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 
00:27:09.812 [2024-11-20 11:21:37.216798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.216831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.217010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.217044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.217153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.217186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.217399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.812 [2024-11-20 11:21:37.217433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.812 qpair failed and we were unable to recover it. 00:27:09.812 [2024-11-20 11:21:37.217612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.217644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 
00:27:09.813 [2024-11-20 11:21:37.217817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.217850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.218044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.218077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.218291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.218324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.218521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.218556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.218684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.218717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 
00:27:09.813 [2024-11-20 11:21:37.218834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.218866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.218992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.219026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.219147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.219181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.219387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.219420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.219612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.219645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 
00:27:09.813 [2024-11-20 11:21:37.219773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.219806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.219926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.219968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.220079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.220113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.220234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.220267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.220442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.220474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 
00:27:09.813 [2024-11-20 11:21:37.220666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.220699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.220836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.220869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.220988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.221023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.221146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.221179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.221424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.221457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 
00:27:09.813 [2024-11-20 11:21:37.221670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.221704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.221879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.221912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.222209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.222243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.222357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.222390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.222527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.222560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 
00:27:09.813 [2024-11-20 11:21:37.222777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.222809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.222927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.222970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.223164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.223196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.223319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.223359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.223469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.223502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 
00:27:09.813 [2024-11-20 11:21:37.223609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.223642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.223742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.813 [2024-11-20 11:21:37.223776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.813 qpair failed and we were unable to recover it. 00:27:09.813 [2024-11-20 11:21:37.223966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.814 [2024-11-20 11:21:37.224001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.814 qpair failed and we were unable to recover it. 00:27:09.814 [2024-11-20 11:21:37.224176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.814 [2024-11-20 11:21:37.224209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.814 qpair failed and we were unable to recover it. 00:27:09.814 [2024-11-20 11:21:37.224454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.814 [2024-11-20 11:21:37.224486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.814 qpair failed and we were unable to recover it. 
00:27:09.817 [2024-11-20 11:21:37.247992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.248026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.248142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.248174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.248303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.248335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.248458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.248491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.248632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.248665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 
00:27:09.817 [2024-11-20 11:21:37.248846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.248878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.249052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.249085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.249261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.249295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.249415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.249446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.249548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.249580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 
00:27:09.817 [2024-11-20 11:21:37.249833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.249866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.250116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.250150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.250265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.250297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.250483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.250515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 00:27:09.817 [2024-11-20 11:21:37.250711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.817 [2024-11-20 11:21:37.250744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.817 qpair failed and we were unable to recover it. 
00:27:09.817 [2024-11-20 11:21:37.250930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.250973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.251148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.251181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.251360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.251392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.251580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.251609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.251821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.251850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 
00:27:09.818 [2024-11-20 11:21:37.252024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.252056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.252239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.252268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.252371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.252401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.252528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.252558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.252676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.252704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 
00:27:09.818 [2024-11-20 11:21:37.252899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.252929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.253128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.253159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.253331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.253362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:09.818 [2024-11-20 11:21:37.253486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.818 [2024-11-20 11:21:37.253516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:09.818 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.253628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.253657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 
00:27:10.102 [2024-11-20 11:21:37.253836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.253872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.254061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.254092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.254203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.254232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.254429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.254460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.254702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.254731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 
00:27:10.102 [2024-11-20 11:21:37.254912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.254941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.255080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.255110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.255391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.255422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.255608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.255637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.255740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.255770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 
00:27:10.102 [2024-11-20 11:21:37.255885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.255916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.256151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.256182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.256383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.256413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.256616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.256647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.256787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.256818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 
00:27:10.102 [2024-11-20 11:21:37.256996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.257028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.257273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.257305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.257424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.257455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.257653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.257683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.257886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.257917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 
00:27:10.102 [2024-11-20 11:21:37.258110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.258143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.258330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.258362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.258577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.258610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.258786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.102 [2024-11-20 11:21:37.258817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.102 qpair failed and we were unable to recover it. 00:27:10.102 [2024-11-20 11:21:37.258945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.258991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 
00:27:10.103 [2024-11-20 11:21:37.259094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.259127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.259235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.259266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.259463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.259495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.259663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.259695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.259825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.259857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 
00:27:10.103 [2024-11-20 11:21:37.259994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.260028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.260209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.260241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.260480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.260511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.260681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.260713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.260894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.260927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 
00:27:10.103 [2024-11-20 11:21:37.261074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.261107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.261304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.261337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.261574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.261606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.261793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.261825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.261938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.261981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 
00:27:10.103 [2024-11-20 11:21:37.262152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.262190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.262378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.262411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.262598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.262631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.262808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.262840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 00:27:10.103 [2024-11-20 11:21:37.263014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.103 [2024-11-20 11:21:37.263049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.103 qpair failed and we were unable to recover it. 
00:27:10.103 [2024-11-20 11:21:37.263175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.263208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.263413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.263446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.263625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.263657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.263791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.263823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.264069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.264104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.264305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.264337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.264444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.264476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.264655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.264687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.264806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.264843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.264972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.265007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.265244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.103 [2024-11-20 11:21:37.265277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.103 qpair failed and we were unable to recover it.
00:27:10.103 [2024-11-20 11:21:37.265514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.265546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.265724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.265756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.265938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.265980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.266149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.266182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.266469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.266502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.266712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.266743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.266928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.266969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.267092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.267123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.267372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.267404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.267590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.267622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.267745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.267776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.267971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.268006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.268269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.268301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.268483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.268515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.268648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.268680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.268916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.268966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.269165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.269197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.269307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.269339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.269453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.269485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.269724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.269755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.269870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.269902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.270161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.270195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.270369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.270401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.270585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.270618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.270748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.270786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.270975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.271009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.271292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.271323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.271526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.271558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.271744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.271776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.271958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.271991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.104 [2024-11-20 11:21:37.272165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.104 [2024-11-20 11:21:37.272197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.104 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.272376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.272408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.272644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.272675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.272854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.272886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.273125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.273159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.273276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.273308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.273440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.273472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.273641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.273673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.273783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.273816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.273995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.274029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.274303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.274334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.274456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.274488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.274671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.274702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.275045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.275079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.275280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.275312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.275498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.275529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.275766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.275798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.276034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.276067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.276248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.276281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.276397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.276428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.276687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.276719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.276902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.276935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.277192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.277225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.277353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.277385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.277657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.277689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.277869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.277901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.278119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.278153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.278338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.278369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.278564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.278595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.278833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.278865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.279077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.279111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.279304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.279336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.279594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.105 [2024-11-20 11:21:37.279625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.105 qpair failed and we were unable to recover it.
00:27:10.105 [2024-11-20 11:21:37.279738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.279770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.279970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.280010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.280143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.280175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.280365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.280397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.280570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.280601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.280792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.280825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.281006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.281040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.281226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.281258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.281439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.281471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.281706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.281738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.281909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.281943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.282191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.282223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.282413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.282446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.282724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.282756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.282977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.283012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.283271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.283304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.283543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.283576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.283748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.283780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.283974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.284008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.284294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.284326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.284591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.284623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.284807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.284838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.284967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.285001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.285132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.285163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.285277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.285309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.285531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.285562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.285755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.285787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.286024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.286057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.286210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.286244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.286432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.286463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.286703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.286734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.286905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.286937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.106 [2024-11-20 11:21:37.287204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.106 [2024-11-20 11:21:37.287237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.106 qpair failed and we were unable to recover it.
00:27:10.107 [2024-11-20 11:21:37.287407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.107 [2024-11-20 11:21:37.287438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.107 qpair failed and we were unable to recover it.
00:27:10.107 [2024-11-20 11:21:37.287559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.107 [2024-11-20 11:21:37.287591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.107 qpair failed and we were unable to recover it.
00:27:10.107 [2024-11-20 11:21:37.287835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.107 [2024-11-20 11:21:37.287867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.107 qpair failed and we were unable to recover it.
00:27:10.107 [2024-11-20 11:21:37.288102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.107 [2024-11-20 11:21:37.288136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.107 qpair failed and we were unable to recover it.
00:27:10.107 [2024-11-20 11:21:37.288400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.107 [2024-11-20 11:21:37.288432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.107 qpair failed and we were unable to recover it.
00:27:10.107 [2024-11-20 11:21:37.288633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.107 [2024-11-20 11:21:37.288665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.107 qpair failed and we were unable to recover it.
00:27:10.107 [2024-11-20 11:21:37.288865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.107 [2024-11-20 11:21:37.288897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.107 qpair failed and we were unable to recover it.
00:27:10.107 [2024-11-20 11:21:37.289165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.107 [2024-11-20 11:21:37.289198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.107 qpair failed and we were unable to recover it.
00:27:10.107 [2024-11-20 11:21:37.289381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.289424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.289606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.289639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.289816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.289847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.289984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.290018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.290144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.290176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 
00:27:10.107 [2024-11-20 11:21:37.290376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.290407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.290580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.290611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.290867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.290899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.291030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.291064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.291181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.291213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 
00:27:10.107 [2024-11-20 11:21:37.291394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.291426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.291666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.291698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.291832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.291863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.292144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.292178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.292429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.292461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 
00:27:10.107 [2024-11-20 11:21:37.292744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.292777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.293042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.293075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.293207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.293239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.293498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.107 [2024-11-20 11:21:37.293530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.107 qpair failed and we were unable to recover it. 00:27:10.107 [2024-11-20 11:21:37.293651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.293683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 
00:27:10.108 [2024-11-20 11:21:37.293873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.293905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.294099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.294133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.294245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.294277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.294409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.294441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.294628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.294660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 
00:27:10.108 [2024-11-20 11:21:37.294903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.294934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.295202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.295235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.295440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.295474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.295601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.295634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.295815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.295847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 
00:27:10.108 [2024-11-20 11:21:37.295974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.296008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.296189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.296221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.296400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.296433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.296715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.296747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.296958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.296992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 
00:27:10.108 [2024-11-20 11:21:37.297177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.297209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.297403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.297436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.297646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.297678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.297808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.297840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.298101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.298135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 
00:27:10.108 [2024-11-20 11:21:37.298332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.298371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.298478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.298511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.298726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.298758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.299048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.299084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.299192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.299225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 
00:27:10.108 [2024-11-20 11:21:37.299481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.299516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.299649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.299681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.299813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.299846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.300031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.300065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.300329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.300361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 
00:27:10.108 [2024-11-20 11:21:37.300509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.300542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.300749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.108 [2024-11-20 11:21:37.300780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.108 qpair failed and we were unable to recover it. 00:27:10.108 [2024-11-20 11:21:37.300895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.300928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.301138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.301171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.301357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.301389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 
00:27:10.109 [2024-11-20 11:21:37.301496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.301528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.301733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.301765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.301868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.301901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.302094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.302127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.302247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.302278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 
00:27:10.109 [2024-11-20 11:21:37.302398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.302431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.302559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.302591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.302725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.302757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.302930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.302971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.303179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.303211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 
00:27:10.109 [2024-11-20 11:21:37.303403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.303435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.303621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.303653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.303774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.303807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.303977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.304011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.304199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.304231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 
00:27:10.109 [2024-11-20 11:21:37.304342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.304375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.304491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.304523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.304712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.304744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.304993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.305028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.305204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.305235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 
00:27:10.109 [2024-11-20 11:21:37.305414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.305446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.305580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.305611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.305720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.305752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.305965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.305999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 00:27:10.109 [2024-11-20 11:21:37.306112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.109 [2024-11-20 11:21:37.306145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.109 qpair failed and we were unable to recover it. 
00:27:10.113 [2024-11-20 11:21:37.327547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.327578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.327697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.327729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.327833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.327865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.328060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.328094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.328216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.328248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 
00:27:10.113 [2024-11-20 11:21:37.328374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.328406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.328577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.328609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.328799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.328831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.329012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.329045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.329251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.329283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 
00:27:10.113 [2024-11-20 11:21:37.329400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.329443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.329545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.329577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.329765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.329797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.330042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.330076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.330297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.330329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 
00:27:10.113 [2024-11-20 11:21:37.330509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.330542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.330660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.330692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.330874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.330906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.331061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.331095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.331276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.331309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 
00:27:10.113 [2024-11-20 11:21:37.331482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.113 [2024-11-20 11:21:37.331514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.113 qpair failed and we were unable to recover it. 00:27:10.113 [2024-11-20 11:21:37.331696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.331729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.331850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.331882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.332002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.332038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.332163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.332196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 
00:27:10.114 [2024-11-20 11:21:37.332311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.332343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.332518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.332550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.332720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.332752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.332867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.332900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.333094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.333136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 
00:27:10.114 [2024-11-20 11:21:37.333264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.333296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.333475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.333507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.333610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.333642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.333894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.333926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.334066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.334098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 
00:27:10.114 [2024-11-20 11:21:37.334223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.334256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.334380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.334413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.334611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.334643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.334834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.334866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.335095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.335128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 
00:27:10.114 [2024-11-20 11:21:37.335252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.335285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.335528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.335560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.335678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.335711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.335885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.335917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.336050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.336083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 
00:27:10.114 [2024-11-20 11:21:37.336199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.336232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.336343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.336375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.336616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.336648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.336756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.336789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.336903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.336936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 
00:27:10.114 [2024-11-20 11:21:37.337131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.337165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.337340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.337374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.114 [2024-11-20 11:21:37.337496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.114 [2024-11-20 11:21:37.337528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.114 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.337638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.337670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.337794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.337827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 
00:27:10.115 [2024-11-20 11:21:37.337943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.337988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.338106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.338139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.338374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.338406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.338577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.338609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.338745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.338777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 
00:27:10.115 [2024-11-20 11:21:37.338885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.338917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.339140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.339173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.339304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.339336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.339445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.339477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.339657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.339690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 
00:27:10.115 [2024-11-20 11:21:37.339860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.339892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.340151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.340184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.340372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.340404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.340522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.340554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.340694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.340726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 
00:27:10.115 [2024-11-20 11:21:37.340859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.340892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.341011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.341046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.341175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.341207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.341320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.341353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 00:27:10.115 [2024-11-20 11:21:37.341470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.341502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it. 
00:27:10.115 [2024-11-20 11:21:37.341627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.115 [2024-11-20 11:21:37.341658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.115 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." repeats continuously from 11:21:37.341 through 11:21:37.362 ...]
00:27:10.119 [2024-11-20 11:21:37.362655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.362698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it.
00:27:10.119 [2024-11-20 11:21:37.362883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.362912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.363115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.363144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.363251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.363279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.363388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.363416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.363611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.363644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 
00:27:10.119 [2024-11-20 11:21:37.363761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.363792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.363896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.363927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.364125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.364164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.364300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.364332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.364510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.364542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 
00:27:10.119 [2024-11-20 11:21:37.364666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.364698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.364813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.364845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.365041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.365072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.365192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.365221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.365339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.365368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 
00:27:10.119 [2024-11-20 11:21:37.365480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.365510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.365642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.365671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.365834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.365864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.365977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.366007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 00:27:10.119 [2024-11-20 11:21:37.366221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.366252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.119 qpair failed and we were unable to recover it. 
00:27:10.119 [2024-11-20 11:21:37.366421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.119 [2024-11-20 11:21:37.366451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.366692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.366722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.366910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.366940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.367077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.367107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.367290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.367319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 
00:27:10.120 [2024-11-20 11:21:37.367482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.367511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.367702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.367731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.367909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.367939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.368071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.368100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.368274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.368305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 
00:27:10.120 [2024-11-20 11:21:37.368441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.368473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.368662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.368693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.368798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.368829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.369004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.369037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.369174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.369207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 
00:27:10.120 [2024-11-20 11:21:37.369387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.369418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.369547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.369579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.369704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.369749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.369862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.369891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.369998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.370029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 
00:27:10.120 [2024-11-20 11:21:37.370138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.370168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.370304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.370334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.370497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.370526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.370694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.370723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.370821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.370849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 
00:27:10.120 [2024-11-20 11:21:37.371031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.371062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.371201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.371230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.371438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.371472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.371640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.371670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.371782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.371811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 
00:27:10.120 [2024-11-20 11:21:37.371916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.371945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.372074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.372103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.372204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.120 [2024-11-20 11:21:37.372233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.120 qpair failed and we were unable to recover it. 00:27:10.120 [2024-11-20 11:21:37.372332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.372361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.372541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.372571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 
00:27:10.121 [2024-11-20 11:21:37.372682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.372712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.372842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.372871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.372975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.373006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.373111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.373140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.373398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.373441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 
00:27:10.121 [2024-11-20 11:21:37.373569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.373601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.373724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.373756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.373927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.373983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.374098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.374127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.374323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.374351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 
00:27:10.121 [2024-11-20 11:21:37.374464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.374493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.374662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.374691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.374923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.374960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.375088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.375118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.375230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.375259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 
00:27:10.121 [2024-11-20 11:21:37.375425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.375454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.375643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.375673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.375773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.375802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.375920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.375958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.376155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.376185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 
00:27:10.121 [2024-11-20 11:21:37.376354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.376383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.376556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.376587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.376705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.376737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.376961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.376996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 00:27:10.121 [2024-11-20 11:21:37.377101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.121 [2024-11-20 11:21:37.377132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.121 qpair failed and we were unable to recover it. 
00:27:10.125 [2024-11-20 11:21:37.397514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.397546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.397789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.397820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.397965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.398001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.398111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.398144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.398315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.398346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 
00:27:10.125 [2024-11-20 11:21:37.398583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.398616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.398802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.398834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.399078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.399113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.399330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.399362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.399507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.399540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 
00:27:10.125 [2024-11-20 11:21:37.399756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.399788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.399975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.400009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.400192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.400225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.400346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.400378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.400498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.400530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 
00:27:10.125 [2024-11-20 11:21:37.400716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.400748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.400869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.400901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.401023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.401057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.401180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.401213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.401329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.401360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 
00:27:10.125 [2024-11-20 11:21:37.401599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.401631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.401808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.401840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.125 [2024-11-20 11:21:37.402020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.125 [2024-11-20 11:21:37.402060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.125 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.402190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.402222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.402333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.402365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 
00:27:10.126 [2024-11-20 11:21:37.402535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.402568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.402757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.402789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.402901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.402933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.403079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.403112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.403230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.403262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 
00:27:10.126 [2024-11-20 11:21:37.403364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.403396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.403501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.403534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.403707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.403739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.403941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.403995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.404172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.404204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 
00:27:10.126 [2024-11-20 11:21:37.404381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.404413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.404530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.404562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.404740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.404772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.404883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.404916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.405182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.405216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 
00:27:10.126 [2024-11-20 11:21:37.405450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.405483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.405593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.405625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.405743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.405774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.405940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.405984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.406099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.406131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 
00:27:10.126 [2024-11-20 11:21:37.406238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.406270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.406486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.406519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.406652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.406684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.406862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.406894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.407036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.407070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 
00:27:10.126 [2024-11-20 11:21:37.407253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.407285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.407388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.407419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.407539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.407571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.407679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.407711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 00:27:10.126 [2024-11-20 11:21:37.407920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.126 [2024-11-20 11:21:37.407962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.126 qpair failed and we were unable to recover it. 
00:27:10.127 [2024-11-20 11:21:37.408159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.408191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.408315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.408347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.408451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.408483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.408605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.408637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.408742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.408774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 
00:27:10.127 [2024-11-20 11:21:37.408975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.409010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.409182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.409215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.409334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.409371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.409502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.409534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.409657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.409689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 
00:27:10.127 [2024-11-20 11:21:37.409823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.409855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.410056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.410091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.410266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.410298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.410486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.410517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.410623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.410655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 
00:27:10.127 [2024-11-20 11:21:37.410757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.410789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.410967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.411000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.411132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.411164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.411334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.411366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.411510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.411543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 
00:27:10.127 [2024-11-20 11:21:37.411664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.411696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.411806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.411840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.412085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.412119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.412240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.412272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 00:27:10.127 [2024-11-20 11:21:37.412394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.412426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it. 
00:27:10.127 [2024-11-20 11:21:37.412595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.127 [2024-11-20 11:21:37.412627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.127 qpair failed and we were unable to recover it.
[... the three-message sequence above repeats continuously from 11:21:37.412595 through 11:21:37.429726 for tqpair=0x7f684c000b90, identical except for timestamps ...]
00:27:10.130 [2024-11-20 11:21:37.430066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.130 [2024-11-20 11:21:37.430141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.130 qpair failed and we were unable to recover it.
[... the same sequence then repeats from 11:21:37.430066 through 11:21:37.434745 for tqpair=0x7f6844000b90 ...]
00:27:10.131 [2024-11-20 11:21:37.434934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.434980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.435110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.435142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.435280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.435312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.435436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.435469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.435674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.435706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 
00:27:10.131 [2024-11-20 11:21:37.435875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.435906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.436111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.436145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.436281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.436313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.436577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.436610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.436792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.436824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 
00:27:10.131 [2024-11-20 11:21:37.436999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.437033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.437151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.437184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.437298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.437331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.437515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.437547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.437732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.437770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 
00:27:10.131 [2024-11-20 11:21:37.437876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.437908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.438100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.438134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.131 [2024-11-20 11:21:37.438398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.131 [2024-11-20 11:21:37.438431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.131 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.438612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.438645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.438755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.438787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 
00:27:10.132 [2024-11-20 11:21:37.438906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.438938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.439124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.439157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.439293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.439325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.439510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.439543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.439725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.439758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 
00:27:10.132 [2024-11-20 11:21:37.440031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.440065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.440315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.440348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.440475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.440508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.440735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.440768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.440887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.440919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 
00:27:10.132 [2024-11-20 11:21:37.441119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.441152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.441280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.441311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.441441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.441473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.441606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.441639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.441761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.441793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 
00:27:10.132 [2024-11-20 11:21:37.441983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.442017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.442124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.442156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.442415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.442447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.442575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.442607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.442727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.442759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 
00:27:10.132 [2024-11-20 11:21:37.442871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.442904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.443067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.443101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.443210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.443242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.443375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.443406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.443516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.443548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 
00:27:10.132 [2024-11-20 11:21:37.443720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.443752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.443859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.443891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.444069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.444103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.444345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.444378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 00:27:10.132 [2024-11-20 11:21:37.444484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.444517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.132 qpair failed and we were unable to recover it. 
00:27:10.132 [2024-11-20 11:21:37.444617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.132 [2024-11-20 11:21:37.444650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.444831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.444863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.444987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.445021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.445128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.445161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.445280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.445318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 
00:27:10.133 [2024-11-20 11:21:37.445422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.445454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.445574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.445606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.445827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.445860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.445978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.446011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.446197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.446230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 
00:27:10.133 [2024-11-20 11:21:37.446423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.446457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.446574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.446606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.446721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.446753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.446877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.446910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.447038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.447072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 
00:27:10.133 [2024-11-20 11:21:37.447268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.447301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.447419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.447451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.447574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.447606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.447788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.447822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.447999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.448032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 
00:27:10.133 [2024-11-20 11:21:37.448220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.448253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.448369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.448402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.448638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.448670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.448789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.448822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 00:27:10.133 [2024-11-20 11:21:37.449038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.133 [2024-11-20 11:21:37.449073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.133 qpair failed and we were unable to recover it. 
00:27:10.133 [2024-11-20 11:21:37.449186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.133 [2024-11-20 11:21:37.449218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.137 [... the three-line connect() failure above (errno = 111, same tqpair=0x7f6844000b90, addr=10.0.0.2, port=4420) repeats continuously through 2024-11-20 11:21:37.470577 ...]
00:27:10.137 [2024-11-20 11:21:37.470697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.470730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 00:27:10.137 [2024-11-20 11:21:37.470900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.470931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 00:27:10.137 [2024-11-20 11:21:37.471055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.471087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 00:27:10.137 [2024-11-20 11:21:37.471260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.471292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 00:27:10.137 [2024-11-20 11:21:37.471474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.471506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 
00:27:10.137 [2024-11-20 11:21:37.471688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.471721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 00:27:10.137 [2024-11-20 11:21:37.471844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.471876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 00:27:10.137 [2024-11-20 11:21:37.472063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.472098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 00:27:10.137 [2024-11-20 11:21:37.472291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.472324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 00:27:10.137 [2024-11-20 11:21:37.472580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.472613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 
00:27:10.137 [2024-11-20 11:21:37.472724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.472757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.137 qpair failed and we were unable to recover it. 00:27:10.137 [2024-11-20 11:21:37.472881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.137 [2024-11-20 11:21:37.472913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.473034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.473068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.473179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.473211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.473319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.473352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 
00:27:10.138 [2024-11-20 11:21:37.473528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.473561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.473694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.473726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.473847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.473879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.474004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.474037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.474149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.474181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 
00:27:10.138 [2024-11-20 11:21:37.474305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.474338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.474516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.474549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.474732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.474764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.474940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.474981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.475099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.475138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 
00:27:10.138 [2024-11-20 11:21:37.475251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.475284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.475402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.475436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.475559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.475593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.475716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.475749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.475869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.475900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 
00:27:10.138 [2024-11-20 11:21:37.476036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.476070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.476245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.476278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.476396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.476427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.476552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.476584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.476711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.476744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 
00:27:10.138 [2024-11-20 11:21:37.476863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.476895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.477135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.477170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.477364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.477397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.477611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.477644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.477748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.477781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 
00:27:10.138 [2024-11-20 11:21:37.477885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.477917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.478056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.138 [2024-11-20 11:21:37.478089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.138 qpair failed and we were unable to recover it. 00:27:10.138 [2024-11-20 11:21:37.478210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.478242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.478350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.478383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.478489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.478521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 
00:27:10.139 [2024-11-20 11:21:37.478651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.478684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.478800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.478832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.479013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.479046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.479156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.479188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.479313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.479346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 
00:27:10.139 [2024-11-20 11:21:37.479456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.479489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.479689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.479722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.479855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.479888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.480016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.480050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.480158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.480191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 
00:27:10.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 23587 Killed "${NVMF_APP[@]}" "$@" 00:27:10.139 [2024-11-20 11:21:37.480301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.480334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.480461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.480493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.480688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.480719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.480836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.480869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 
00:27:10.139 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:10.139 [2024-11-20 11:21:37.481007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.481039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.481162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.481195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.481299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.481332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.481516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.481547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 
00:27:10.139 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:10.139 [2024-11-20 11:21:37.481724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.481757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.139 [2024-11-20 11:21:37.482013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.482047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.482178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.482211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.139 [2024-11-20 11:21:37.482387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.482420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 
00:27:10.139 [2024-11-20 11:21:37.482525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.482557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.139 qpair failed and we were unable to recover it. 00:27:10.139 [2024-11-20 11:21:37.482673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.139 [2024-11-20 11:21:37.482704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.140 qpair failed and we were unable to recover it. 00:27:10.140 [2024-11-20 11:21:37.482884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.140 [2024-11-20 11:21:37.482917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.140 qpair failed and we were unable to recover it. 00:27:10.140 [2024-11-20 11:21:37.483064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.140 [2024-11-20 11:21:37.483097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.140 qpair failed and we were unable to recover it. 00:27:10.140 [2024-11-20 11:21:37.483214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.140 [2024-11-20 11:21:37.483246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.140 qpair failed and we were unable to recover it. 
00:27:10.140 [2024-11-20 11:21:37.483428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.140 [2024-11-20 11:21:37.483461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:10.140 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / qpair failed triplet repeats for every retry from 11:21:37.483 to 11:21:37.489 ...]
00:27:10.141 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=24354
00:27:10.141 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 24354
00:27:10.141 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:10.141 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 24354 ']'
00:27:10.141 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:10.141 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:10.141 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:10.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:10.141 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:10.141 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed, errno = 111 / qpair failed triplet continues to repeat through 11:21:37.501 ...]
00:27:10.143 [2024-11-20 11:21:37.501814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.143 [2024-11-20 11:21:37.501844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:10.143 qpair failed and we were unable to recover it.
00:27:10.143 [2024-11-20 11:21:37.501971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.502002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.502103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.502132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.502249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.502279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.502400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.502431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.502608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.502639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 
00:27:10.143 [2024-11-20 11:21:37.502745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.502775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.502884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.502918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.503039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.503070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.503195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.503225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.503353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.503384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 
00:27:10.143 [2024-11-20 11:21:37.503493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.503524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.503651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.503678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.503784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.503812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.503910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.503939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 00:27:10.143 [2024-11-20 11:21:37.504048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.504077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.143 qpair failed and we were unable to recover it. 
00:27:10.143 [2024-11-20 11:21:37.504174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.143 [2024-11-20 11:21:37.504202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.504314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.504341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.504601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.504628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.504793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.504820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.504923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.504959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 
00:27:10.144 [2024-11-20 11:21:37.505130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.505158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.505270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.505299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.505404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.505431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.505537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.505565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.505664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.505691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 
00:27:10.144 [2024-11-20 11:21:37.505782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.505810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.505908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.505936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.506042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.506071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.506181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.506209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.506378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.506406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 
00:27:10.144 [2024-11-20 11:21:37.506514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.506542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.506706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.506733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.506839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.506867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.506996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.507027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.507135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.507163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 
00:27:10.144 [2024-11-20 11:21:37.507261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.507288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.507399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.507427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.507525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.507553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.507653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.507681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.507789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.507817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 
00:27:10.144 [2024-11-20 11:21:37.507982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.508010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.508119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.508148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.508245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.508272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.508380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.508408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.508642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.508670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 
00:27:10.144 [2024-11-20 11:21:37.508834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.508862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.508975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.509010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.144 qpair failed and we were unable to recover it. 00:27:10.144 [2024-11-20 11:21:37.509112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.144 [2024-11-20 11:21:37.509140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.509239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.509266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.509498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.509527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 
00:27:10.145 [2024-11-20 11:21:37.509636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.509664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.509778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.509804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.509907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.509934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.510104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.510133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.510241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.510268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 
00:27:10.145 [2024-11-20 11:21:37.510365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.510393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.510492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.510520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.510679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.510706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.510800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.510827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.510933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.510972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 
00:27:10.145 [2024-11-20 11:21:37.511083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.511111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.511215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.511242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.511344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.511374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.511543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.511571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.511664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.511693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 
00:27:10.145 [2024-11-20 11:21:37.511793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.511821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.511928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.511965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.512065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.512093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.512231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.512259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.512417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.512444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 
00:27:10.145 [2024-11-20 11:21:37.512563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.512592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.512783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.512811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.512991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.513019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.513129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.513158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 00:27:10.145 [2024-11-20 11:21:37.513323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.513349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 
00:27:10.145 [2024-11-20 11:21:37.513507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.145 [2024-11-20 11:21:37.513533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.145 qpair failed and we were unable to recover it. 
00:27:10.145-00:27:10.148 [2024-11-20 11:21:37.513632 through 11:21:37.533310] last message pair repeated 114 times (connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) for tqpair=0x7f6844000b90, 0x7f684c000b90, and 0x7f6850000b90. 
00:27:10.148 [2024-11-20 11:21:37.533433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.148 [2024-11-20 11:21:37.533465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.148 qpair failed and we were unable to recover it. 00:27:10.148 [2024-11-20 11:21:37.533570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.148 [2024-11-20 11:21:37.533602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.148 qpair failed and we were unable to recover it. 00:27:10.148 [2024-11-20 11:21:37.533706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.148 [2024-11-20 11:21:37.533738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.148 qpair failed and we were unable to recover it. 00:27:10.148 [2024-11-20 11:21:37.533908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.148 [2024-11-20 11:21:37.533946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.148 qpair failed and we were unable to recover it. 00:27:10.148 [2024-11-20 11:21:37.534073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.148 [2024-11-20 11:21:37.534106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.148 qpair failed and we were unable to recover it. 
00:27:10.148 [2024-11-20 11:21:37.534231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.148 [2024-11-20 11:21:37.534263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.148 qpair failed and we were unable to recover it. 00:27:10.148 [2024-11-20 11:21:37.534503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.148 [2024-11-20 11:21:37.534536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.148 qpair failed and we were unable to recover it. 00:27:10.148 [2024-11-20 11:21:37.534640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.534685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.534794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.534826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.535013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.535059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 
00:27:10.149 [2024-11-20 11:21:37.535180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.535211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.535391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.535424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.535537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.535570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.535744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.535776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.535965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.536000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 
00:27:10.149 [2024-11-20 11:21:37.536121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.536155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.536279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.536312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.536430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.536464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.536582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.536614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.536739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.536771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 
00:27:10.149 [2024-11-20 11:21:37.536961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.536995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.537180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.537212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.537332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.537364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.537478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.537512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.537631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.537663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 
00:27:10.149 [2024-11-20 11:21:37.537837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.537870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.537998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.538034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.538165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.538197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.538380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.538412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 00:27:10.149 [2024-11-20 11:21:37.538539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.149 [2024-11-20 11:21:37.538572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.149 qpair failed and we were unable to recover it. 
[... connect() / qpair-failure records continue for tqpair=0x7f684c000b90 ...]
00:27:10.149 [2024-11-20 11:21:37.539347] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization...
00:27:10.149 [2024-11-20 11:21:37.539389] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... connect() / qpair-failure records continue for tqpair=0x7f684c000b90 ...]
[... the connect() / qpair-failure record repeats 40 more times for tqpair=0x7f684c000b90 between 11:21:37.540 and 11:21:37.547 ...]
[... connect() / qpair-failure records continue for tqpair=0x7f684c000b90 through 11:21:37.548 ...]
00:27:10.150 [2024-11-20 11:21:37.548325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.150 [2024-11-20 11:21:37.548365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:10.150 qpair failed and we were unable to recover it.
[... the same record repeats 10 more times for tqpair=0x7f6844000b90 between 11:21:37.548 and 11:21:37.550 ...]
00:27:10.151 [2024-11-20 11:21:37.550659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.550693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.550905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.550939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.551125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.551166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.551342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.551374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.551481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.551513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 
00:27:10.151 [2024-11-20 11:21:37.551638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.551670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.551970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.552005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.552195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.552228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.552425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.552457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.552563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.552596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 
00:27:10.151 [2024-11-20 11:21:37.552831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.552865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.552981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.553015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.553202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.553235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.553425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.553458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.553633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.553666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 
00:27:10.151 [2024-11-20 11:21:37.553785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.553818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.554021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.554057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.554227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.554260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.554446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.554480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.554662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.554697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 
00:27:10.151 [2024-11-20 11:21:37.554867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.554899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.555012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.555046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.555239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.555273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.555457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.555488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.555756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.555789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 
00:27:10.151 [2024-11-20 11:21:37.555984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.556020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.556196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.556229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.556417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.556450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.556559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.556591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.556791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.556829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 
00:27:10.151 [2024-11-20 11:21:37.556982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.151 [2024-11-20 11:21:37.557017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.151 qpair failed and we were unable to recover it. 00:27:10.151 [2024-11-20 11:21:37.557136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.557167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.557353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.557385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.557511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.557543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.557721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.557753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 
00:27:10.152 [2024-11-20 11:21:37.557869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.557901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.558025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.558058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.558191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.558224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.558412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.558444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.558704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.558736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 
00:27:10.152 [2024-11-20 11:21:37.558977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.559010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.559198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.559230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.559362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.559399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.559521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.559552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.559660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.559692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 
00:27:10.152 [2024-11-20 11:21:37.559809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.559841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.560022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.560056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.560177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.560209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.560314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.560347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.560567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.560599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 
00:27:10.152 [2024-11-20 11:21:37.560884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.560917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.560978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f3af0 (9): Bad file descriptor 00:27:10.152 [2024-11-20 11:21:37.561191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.561264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.561408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.561445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.561655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.561690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.561862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.561895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 
00:27:10.152 [2024-11-20 11:21:37.562103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.562148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.562323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.562356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.562561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.562593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.562781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.562815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.562934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.562982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 
00:27:10.152 [2024-11-20 11:21:37.563103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.563136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.563342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.563376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.563563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.563598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.563786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.563819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.563957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.563992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 
00:27:10.152 [2024-11-20 11:21:37.564107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.564140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.564249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.564283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.564533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.564566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.564683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.564717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.564901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.564935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 
00:27:10.152 [2024-11-20 11:21:37.565139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.152 [2024-11-20 11:21:37.565172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.152 qpair failed and we were unable to recover it. 00:27:10.152 [2024-11-20 11:21:37.565310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.565343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.565519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.565552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.565674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.565707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.565823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.565857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 
00:27:10.153 [2024-11-20 11:21:37.565969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.566004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.566117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.566151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.566279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.566313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.566552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.566586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.566726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.566761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 
00:27:10.153 [2024-11-20 11:21:37.566961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.566995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.567274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.567308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.567494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.567534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.567641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.567674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.567798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.567831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 
00:27:10.153 [2024-11-20 11:21:37.568004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.568039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.568167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.568201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.568305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.568338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.568450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.568484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.568592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.568624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 
00:27:10.153 [2024-11-20 11:21:37.568744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.568777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.568945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.568989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.569185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.569219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.569342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.569375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.569480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.569514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 
00:27:10.153 [2024-11-20 11:21:37.569685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.569718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.569913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.569959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.570081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.570115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.570303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.570336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.570442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.570475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 
00:27:10.153 [2024-11-20 11:21:37.570583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.570616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.570753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.570786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.570970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.153 [2024-11-20 11:21:37.571005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.153 qpair failed and we were unable to recover it. 00:27:10.153 [2024-11-20 11:21:37.571137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.571170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.571373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.571406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 
00:27:10.437 [2024-11-20 11:21:37.571512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.571544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.571652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.571684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.571810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.571844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.571967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.572001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.572117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.572150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 
00:27:10.437 [2024-11-20 11:21:37.572358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.572392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.572497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.572531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.572716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.572749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.572864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.572896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.573020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.573054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 
00:27:10.437 [2024-11-20 11:21:37.573172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.573205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.573317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.573349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.437 qpair failed and we were unable to recover it. 00:27:10.437 [2024-11-20 11:21:37.573475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.437 [2024-11-20 11:21:37.573508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.573618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.573650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.573860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.573893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 
00:27:10.438 [2024-11-20 11:21:37.574088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.574122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.574299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.574331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.574453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.574486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.574627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.574664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.574772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.574804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 
00:27:10.438 [2024-11-20 11:21:37.574991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.575025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.575242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.575273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.575456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.575487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.575604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.575637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.575748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.575781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 
00:27:10.438 [2024-11-20 11:21:37.575902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.575935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.576136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.576169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.576345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.576378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.576576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.576608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.576792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.576826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 
00:27:10.438 [2024-11-20 11:21:37.576932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.576974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.577091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.577129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.577240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.577272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.577452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.577486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.577662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.577694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 
00:27:10.438 [2024-11-20 11:21:37.577813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.577845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.577975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.578009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.578123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.578154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.578361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.578395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.578573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.578606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 
00:27:10.438 [2024-11-20 11:21:37.578778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.578811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.578942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.578989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.579175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.579208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.579381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.579414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 00:27:10.438 [2024-11-20 11:21:37.579549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.438 [2024-11-20 11:21:37.579582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.438 qpair failed and we were unable to recover it. 
00:27:10.438 [2024-11-20 11:21:37.579707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.579739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.579857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.579889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.580014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.580047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.580226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.580259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.580442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.580475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 
00:27:10.439 [2024-11-20 11:21:37.580602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.580634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.580838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.580871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.581074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.581110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.581297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.581330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.581465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.581498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 
00:27:10.439 [2024-11-20 11:21:37.581611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.581643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.581748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.581781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.581966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.582000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.582170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.582208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.582353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.582386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 
00:27:10.439 [2024-11-20 11:21:37.582566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.582599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.582788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.582821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.582993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.583028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.583206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.583239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 00:27:10.439 [2024-11-20 11:21:37.583354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.439 [2024-11-20 11:21:37.583388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.439 qpair failed and we were unable to recover it. 
00:27:10.439 [2024-11-20 11:21:37.583590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.439 [2024-11-20 11:21:37.583624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:10.439 qpair failed and we were unable to recover it.
[the identical connect()/sock-connection-error/qpair-failed sequence repeats for tqpair=0x7f6844000b90 from 2024-11-20 11:21:37.583804 through 11:21:37.588913]
00:27:10.440 [2024-11-20 11:21:37.589052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.440 [2024-11-20 11:21:37.589093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:10.440 qpair failed and we were unable to recover it.
[the identical sequence repeats for tqpair=0x16e5ba0 from 11:21:37.589226 through 11:21:37.603251]
00:27:10.442 [2024-11-20 11:21:37.603389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.442 [2024-11-20 11:21:37.603426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.442 qpair failed and we were unable to recover it.
[the identical sequence repeats for tqpair=0x7f684c000b90 from 11:21:37.604117 through 11:21:37.605598]
00:27:10.443 [2024-11-20 11:21:37.605771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.605804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.605975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.606009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.606206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.606239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.606420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.606453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.606578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.606611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 
00:27:10.443 [2024-11-20 11:21:37.606802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.606833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.606966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.607001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.607107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.607139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.607320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.607353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.607527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.607559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 
00:27:10.443 [2024-11-20 11:21:37.607676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.607707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.607822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.607855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.607971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.608004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.608193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.608225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.608398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.608430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 
00:27:10.443 [2024-11-20 11:21:37.608532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.608564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.608669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.608701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.608922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.608963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.609085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.609117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.609256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.609288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 
00:27:10.443 [2024-11-20 11:21:37.609459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.609491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.609678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.609711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.609913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.609945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.610164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.610196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 00:27:10.443 [2024-11-20 11:21:37.610370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.610403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.443 qpair failed and we were unable to recover it. 
00:27:10.443 [2024-11-20 11:21:37.610530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.443 [2024-11-20 11:21:37.610563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.610714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.610757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.610886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.610921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.611049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.611082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.611195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.611228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 
00:27:10.444 [2024-11-20 11:21:37.611343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.611376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.611503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.611535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.611654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.611687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.611789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.611822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.612077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.612112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 
00:27:10.444 [2024-11-20 11:21:37.612221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.612254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.612497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.612530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.612650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.612683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.612793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.612824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.612944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.612992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 
00:27:10.444 [2024-11-20 11:21:37.613110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.613143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.613272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.613304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.613426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.613458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.613567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.613599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.613703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.613736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 
00:27:10.444 [2024-11-20 11:21:37.613841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.613873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.613979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.614012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.614185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.614219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.614335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.614368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.614556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.614589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 
00:27:10.444 [2024-11-20 11:21:37.614719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.614752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.614869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.614901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.615097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.615131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.615276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.615309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.615479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.615513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 
00:27:10.444 [2024-11-20 11:21:37.615615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.615648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.615753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.615788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.615890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.615924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.444 qpair failed and we were unable to recover it. 00:27:10.444 [2024-11-20 11:21:37.616049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.444 [2024-11-20 11:21:37.616084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.616225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.616258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 
00:27:10.445 [2024-11-20 11:21:37.616385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.616421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.616621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.616654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.616829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.616862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.617038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.617073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.617212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.617245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 
00:27:10.445 [2024-11-20 11:21:37.617355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.617387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.617588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.617628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.617819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.617852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.618051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.618088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.618220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.618255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 
00:27:10.445 [2024-11-20 11:21:37.618362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.618395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.618512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.618546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.618659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.618692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.618863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.618897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.619111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.619146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 
00:27:10.445 [2024-11-20 11:21:37.619346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.619379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.619496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.619529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.619645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.619678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.619791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.619823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.619941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.619982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 
00:27:10.445 [2024-11-20 11:21:37.620161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.620195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.620326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.620360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.620408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.445 [2024-11-20 11:21:37.620468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.620499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.620628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.620660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.620776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.620809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 
00:27:10.445 [2024-11-20 11:21:37.620922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.620961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.621100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.621133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.621245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.621278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.621416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.621449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.621620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.621655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 
00:27:10.445 [2024-11-20 11:21:37.621792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.621825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.445 qpair failed and we were unable to recover it. 00:27:10.445 [2024-11-20 11:21:37.622033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.445 [2024-11-20 11:21:37.622068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.622236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.622270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.622403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.622435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.622556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.622590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 
00:27:10.446 [2024-11-20 11:21:37.622708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.622741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.622858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.622891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.623076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.623111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.623305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.623338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.623474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.623508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 
00:27:10.446 [2024-11-20 11:21:37.623622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.623655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.623766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.623800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.623910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.623944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.624066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.624099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.624210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.624244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 
00:27:10.446 [2024-11-20 11:21:37.624453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.624487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.624598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.624632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.624764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.624799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.624919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.624961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.625085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.625119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 
00:27:10.446 [2024-11-20 11:21:37.625296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.625329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.625436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.625469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.625643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.625678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.625822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.625858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.625982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.626018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 
00:27:10.446 [2024-11-20 11:21:37.626196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.626230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.626351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.626383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.626566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.626600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.626718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.626751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.627025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.627061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 
00:27:10.446 [2024-11-20 11:21:37.627204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.627246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.627367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.627401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.627526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.627559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.627667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.627701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 00:27:10.446 [2024-11-20 11:21:37.627882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.446 [2024-11-20 11:21:37.627917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.446 qpair failed and we were unable to recover it. 
00:27:10.447 [2024-11-20 11:21:37.628140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.628199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.628353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.628390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.628580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.628613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.628799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.628831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.628944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.628988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 
00:27:10.447 [2024-11-20 11:21:37.629251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.629285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.629402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.629434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.629605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.629638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.629755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.629788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.629901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.629934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 
00:27:10.447 [2024-11-20 11:21:37.630069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.630102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.630273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.630306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.630426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.630459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.630570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.630603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.630734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.630766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 
00:27:10.447 [2024-11-20 11:21:37.630882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.630914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.631056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.631096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.631209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.631242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.631426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.631461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.631631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.631665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 
00:27:10.447 [2024-11-20 11:21:37.631789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.631822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.631940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.631990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.632244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.632278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.632406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.632439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.632621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.632653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 
00:27:10.447 [2024-11-20 11:21:37.632777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.632809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.632921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.632962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.633070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.633103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.633224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.633255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.633377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.633410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 
00:27:10.447 [2024-11-20 11:21:37.633584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.633617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.633743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.633776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.447 [2024-11-20 11:21:37.633941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.447 [2024-11-20 11:21:37.633987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.447 qpair failed and we were unable to recover it. 00:27:10.448 [2024-11-20 11:21:37.634107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.634139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 00:27:10.448 [2024-11-20 11:21:37.634326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.634360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 
00:27:10.448 [2024-11-20 11:21:37.634532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.634570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 00:27:10.448 [2024-11-20 11:21:37.634683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.634716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 00:27:10.448 [2024-11-20 11:21:37.634823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.634855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 00:27:10.448 [2024-11-20 11:21:37.634977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.635011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 00:27:10.448 [2024-11-20 11:21:37.635122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.635153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 
00:27:10.448 [2024-11-20 11:21:37.635274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.635306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 00:27:10.448 [2024-11-20 11:21:37.635481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.635513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 00:27:10.448 [2024-11-20 11:21:37.635618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.635650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 00:27:10.448 [2024-11-20 11:21:37.635756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.635788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 00:27:10.448 [2024-11-20 11:21:37.635973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.448 [2024-11-20 11:21:37.636007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.448 qpair failed and we were unable to recover it. 
00:27:10.448 [2024-11-20 11:21:37.636112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.448 [2024-11-20 11:21:37.636144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:10.448 qpair failed and we were unable to recover it.
00:27:10.448 [... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it" triplet repeats continuously through 11:21:37.655309, always errno = 111 against addr=10.0.0.2, port=4420, cycling across tqpair=0x7f6844000b90, 0x7f684c000b90, and 0x7f6850000b90 ...]
00:27:10.452 [2024-11-20 11:21:37.655444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.655477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.655664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.655697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.655813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.655846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.655978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.656013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.656134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.656167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 
00:27:10.452 [2024-11-20 11:21:37.656272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.656305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.656546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.656579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.656689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.656722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.656894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.656926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.657142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.657176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 
00:27:10.452 [2024-11-20 11:21:37.657365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.657397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.657579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.657612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.657797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.657830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.657959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.657993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.658177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.658211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 
00:27:10.452 [2024-11-20 11:21:37.658414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.658447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.658629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.658661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.658846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.658879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.659059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.659094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.659209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.659241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 
00:27:10.452 [2024-11-20 11:21:37.659459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.659492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.659689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.659722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.659910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.659955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.660176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.660211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.660400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.660434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 
00:27:10.452 [2024-11-20 11:21:37.660613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.660646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.660752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.660785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.660917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.660958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.661082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.661115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.661246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.661279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 
00:27:10.452 [2024-11-20 11:21:37.661382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.661415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.661536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.661569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.452 [2024-11-20 11:21:37.661743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.452 [2024-11-20 11:21:37.661774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.452 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.661897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.661930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.662214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.662250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 
00:27:10.453 [2024-11-20 11:21:37.662429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.662468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.662576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.662609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.662762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.662796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.662980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.663014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.663130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.453 [2024-11-20 11:21:37.663159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.453 [2024-11-20 11:21:37.663167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.453 [2024-11-20 11:21:37.663174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:10.453 [2024-11-20 11:21:37.663179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.453 [2024-11-20 11:21:37.663202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.663233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.663348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.663379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.663573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.663605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.663726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.663758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.663888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.663921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 
00:27:10.453 [2024-11-20 11:21:37.664081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.664113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.664236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.664268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.664379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.664411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.664619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.664654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.664833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.664865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 
00:27:10.453 [2024-11-20 11:21:37.664789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:10.453 [2024-11-20 11:21:37.664898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:10.453 [2024-11-20 11:21:37.665008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:10.453 [2024-11-20 11:21:37.665008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:10.453 [2024-11-20 11:21:37.665102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.665136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.665313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.665343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.665462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.665493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.665685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.665718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 
00:27:10.453 [2024-11-20 11:21:37.665913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.665945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.666137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.666177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.666301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.666333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.666442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.666476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.453 [2024-11-20 11:21:37.666651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.666682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 
00:27:10.453 [2024-11-20 11:21:37.666858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.453 [2024-11-20 11:21:37.666891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.453 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.667083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.667118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.667227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.667261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.667376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.667408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.667542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.667574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 
00:27:10.454 [2024-11-20 11:21:37.667688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.667721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.667858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.667892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.668032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.668067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.668187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.668220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.668327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.668359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 
00:27:10.454 [2024-11-20 11:21:37.668550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.668582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.668774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.668808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.668933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.668978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.669102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.669134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.669258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.669298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 
00:27:10.454 [2024-11-20 11:21:37.669405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.669439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.669627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.669660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.669878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.669910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.670040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.670074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 00:27:10.454 [2024-11-20 11:21:37.670186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.454 [2024-11-20 11:21:37.670219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.454 qpair failed and we were unable to recover it. 
00:27:10.455 [2024-11-20 11:21:37.676073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.455 [2024-11-20 11:21:37.676127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-11-20 11:21:37.676247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.455 [2024-11-20 11:21:37.676281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-11-20 11:21:37.676456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.455 [2024-11-20 11:21:37.676490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-11-20 11:21:37.676617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.455 [2024-11-20 11:21:37.676651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.455 [2024-11-20 11:21:37.676785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.455 [2024-11-20 11:21:37.676828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:10.455 qpair failed and we were unable to recover it.
00:27:10.457 [2024-11-20 11:21:37.691170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.457 [2024-11-20 11:21:37.691204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.457 qpair failed and we were unable to recover it. 00:27:10.457 [2024-11-20 11:21:37.691312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.457 [2024-11-20 11:21:37.691346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.457 qpair failed and we were unable to recover it. 00:27:10.457 [2024-11-20 11:21:37.691465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.457 [2024-11-20 11:21:37.691499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.457 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.691669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.691703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.691814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.691848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 
00:27:10.458 [2024-11-20 11:21:37.692022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.692058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.692187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.692221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.692402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.692436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.692560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.692593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.692704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.692737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 
00:27:10.458 [2024-11-20 11:21:37.692915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.692957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.693134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.693168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.693290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.693324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.693432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.693465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.693581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.693615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 
00:27:10.458 [2024-11-20 11:21:37.693728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.693761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.693880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.693914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.694173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.694227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.694445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.694478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.694622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.694675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 
00:27:10.458 [2024-11-20 11:21:37.694820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.694858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.694984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.695019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.695195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.695230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.695474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.695507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.695627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.695661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 
00:27:10.458 [2024-11-20 11:21:37.695886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.695919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.696082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.696116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.696300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.696334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.696460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.696493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.458 [2024-11-20 11:21:37.696609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.696643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 
00:27:10.458 [2024-11-20 11:21:37.696833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.458 [2024-11-20 11:21:37.696867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.458 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.697050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.697085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.697194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.697227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.697342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.697376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.697503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.697536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 
00:27:10.459 [2024-11-20 11:21:37.697711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.697744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.697918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.697959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.698143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.698176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.698351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.698385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.698537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.698570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 
00:27:10.459 [2024-11-20 11:21:37.698752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.698785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.699000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.699034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.699208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.699243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.699450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.699482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.699616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.699649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 
00:27:10.459 [2024-11-20 11:21:37.699771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.699805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.699928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.699975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.700081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.700114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.700312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.700345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.700471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.700505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 
00:27:10.459 [2024-11-20 11:21:37.700618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.700651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.700759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.700794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.700902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.700936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.701067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.701101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.701286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.701319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 
00:27:10.459 [2024-11-20 11:21:37.701530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.701563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.701685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.701718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.701848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.701880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.702066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.702135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.702257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.702290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 
00:27:10.459 [2024-11-20 11:21:37.702412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.702445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.702550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.702583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.702701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.702734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.702846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.459 [2024-11-20 11:21:37.702877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.459 qpair failed and we were unable to recover it. 00:27:10.459 [2024-11-20 11:21:37.702990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.703025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 
00:27:10.460 [2024-11-20 11:21:37.703138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.703171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.703280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.703313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.703438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.703471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.703579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.703612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.703746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.703779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 
00:27:10.460 [2024-11-20 11:21:37.703967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.704002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.704179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.704213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.704339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.704373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.704495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.704542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.704651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.704684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 
00:27:10.460 [2024-11-20 11:21:37.704869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.704902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.705112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.705147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.705273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.705306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.705421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.705455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 00:27:10.460 [2024-11-20 11:21:37.705638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.460 [2024-11-20 11:21:37.705671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.460 qpair failed and we were unable to recover it. 
[2024-11-20 11:21:37.705793 – 11:21:37.726181] the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it.") repeats ~110 more times; tqpair=0x16e5ba0 through 11:21:37.717281, then tqpair=0x7f684c000b90 and tqpair=0x7f6844000b90, with addr=10.0.0.2, port=4420 throughout.
00:27:10.463 [2024-11-20 11:21:37.726294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.463 [2024-11-20 11:21:37.726326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.463 qpair failed and we were unable to recover it. 00:27:10.463 [2024-11-20 11:21:37.726433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.463 [2024-11-20 11:21:37.726464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.463 qpair failed and we were unable to recover it. 00:27:10.463 [2024-11-20 11:21:37.726577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.463 [2024-11-20 11:21:37.726608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.463 qpair failed and we were unable to recover it. 00:27:10.463 [2024-11-20 11:21:37.726793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.726825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.726943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.726993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-20 11:21:37.727108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.727141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.727267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.727300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.727413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.727444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.727617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.727649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.727759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.727793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-20 11:21:37.727982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.728015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.728204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.728237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.728521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.728553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.728675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.728708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.728884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.728916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-20 11:21:37.729126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.729163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.729353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.729386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.729578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.729611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.729891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.729924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.730054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.730087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-20 11:21:37.730269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.730300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.730472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.730505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.730634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.730664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.730926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.730968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.731141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.731174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-20 11:21:37.731302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.731335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.731514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.731546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.731839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.731872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.732000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.732033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.732167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.732199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-20 11:21:37.732318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.732351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.732457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.732495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.732692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.732724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.732966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.733000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.733239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.733271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-20 11:21:37.733386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.464 [2024-11-20 11:21:37.733418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.464 qpair failed and we were unable to recover it. 00:27:10.464 [2024-11-20 11:21:37.733592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.733624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.733888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.733920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.734127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.734159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.734282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.734315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 
00:27:10.465 [2024-11-20 11:21:37.734439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.734471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.734659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.734692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.734868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.734900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.735048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.735081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.735205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.735236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 
00:27:10.465 [2024-11-20 11:21:37.735343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.735376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.735618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.735650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.735828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.735861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.736036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.736070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.736294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.736325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 
00:27:10.465 [2024-11-20 11:21:37.736501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.736532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.736704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.736735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.736843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.736875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.737068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.737103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.737277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.737309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 
00:27:10.465 [2024-11-20 11:21:37.737493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.737525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.737735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.737768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.737883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.737915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.738052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.738087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.738284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.738316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 
00:27:10.465 [2024-11-20 11:21:37.738558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.738589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.738715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.738747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.738854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.738886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.739002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.739035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.739284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.739316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 
00:27:10.465 [2024-11-20 11:21:37.739500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.739532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.739652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.739683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.739881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.739912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.465 [2024-11-20 11:21:37.740172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.465 [2024-11-20 11:21:37.740205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.465 qpair failed and we were unable to recover it. 00:27:10.466 [2024-11-20 11:21:37.740318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.466 [2024-11-20 11:21:37.740350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.466 qpair failed and we were unable to recover it. 
00:27:10.466 [2024-11-20 11:21:37.740626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.466 [2024-11-20 11:21:37.740658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.466 qpair failed and we were unable to recover it. 00:27:10.466 [2024-11-20 11:21:37.740834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.466 [2024-11-20 11:21:37.740872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.466 qpair failed and we were unable to recover it. 00:27:10.466 [2024-11-20 11:21:37.740989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.466 [2024-11-20 11:21:37.741024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.466 qpair failed and we were unable to recover it. 00:27:10.466 [2024-11-20 11:21:37.741285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.466 [2024-11-20 11:21:37.741316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.466 qpair failed and we were unable to recover it. 00:27:10.466 [2024-11-20 11:21:37.741502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.466 [2024-11-20 11:21:37.741533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420 00:27:10.466 qpair failed and we were unable to recover it. 
00:27:10.466 [2024-11-20 11:21:37.741647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.466 [2024-11-20 11:21:37.741680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6844000b90 with addr=10.0.0.2, port=4420
00:27:10.466 qpair failed and we were unable to recover it.
[the same connect()/qpair-failure record repeats for tqpair=0x7f6844000b90 from 11:21:37.741811 through 11:21:37.748913]
00:27:10.467 [2024-11-20 11:21:37.749205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.467 [2024-11-20 11:21:37.749252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:10.467 qpair failed and we were unable to recover it.
[the same record repeats for tqpair=0x16e5ba0 from 11:21:37.749383 through 11:21:37.759914, interleaved with the following script trace:]
00:27:10.468 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:10.468 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:10.468 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:10.468 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:10.468 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:10.468 [2024-11-20 11:21:37.760152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.468 [2024-11-20 11:21:37.760194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.468 qpair failed and we were unable to recover it.
[the same record repeats for tqpair=0x7f684c000b90 from 11:21:37.760321 through 11:21:37.765615]
00:27:10.469 [2024-11-20 11:21:37.765735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.469 [2024-11-20 11:21:37.765766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.469 qpair failed and we were unable to recover it.
00:27:10.469 [2024-11-20 11:21:37.765966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.469 [2024-11-20 11:21:37.766018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:10.469 qpair failed and we were unable to recover it.
00:27:10.469 [... the failure triplet repeated 3 more times for tqpair=0x7f6850000b90 (11:21:37.766230 through 11:21:37.766701) ...]
00:27:10.469 [2024-11-20 11:21:37.766835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.469 [2024-11-20 11:21:37.766867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:10.469 qpair failed and we were unable to recover it.
00:27:10.472 [... the failure triplet repeated 74 more times for tqpair=0x7f6850000b90 (11:21:37.767055 through 11:21:37.781132), only the timestamps advancing ...]
00:27:10.472 [2024-11-20 11:21:37.781320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.472 [2024-11-20 11:21:37.781351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6850000b90 with addr=10.0.0.2, port=4420
00:27:10.472 qpair failed and we were unable to recover it.
00:27:10.472 [2024-11-20 11:21:37.781487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.472 [2024-11-20 11:21:37.781531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:10.472 qpair failed and we were unable to recover it.
00:27:10.472 [... the failure triplet repeated 3 more times for tqpair=0x16e5ba0 (11:21:37.781753 through 11:21:37.782151) ...]
00:27:10.472 [2024-11-20 11:21:37.782278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.782310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.782503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.782536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.782667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.782700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.782877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.782909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.783116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.783152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 
00:27:10.472 [2024-11-20 11:21:37.783278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.783311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.783486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.783519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.783735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.783768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.783887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.783920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.784055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.784089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 
00:27:10.472 [2024-11-20 11:21:37.784278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.784311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.784423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.784455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.784628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.784661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.784785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.784818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.785005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.785038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 
00:27:10.472 [2024-11-20 11:21:37.785244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.785275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.785474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.785506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.785684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.785728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.785851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.785894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 00:27:10.472 [2024-11-20 11:21:37.786040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.472 [2024-11-20 11:21:37.786073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.472 qpair failed and we were unable to recover it. 
00:27:10.473 [2024-11-20 11:21:37.786277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.786309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.786439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.786470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.786661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.786694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.786878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.786915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.787117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.787150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 
00:27:10.473 [2024-11-20 11:21:37.787342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.787375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.787478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.787510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.787615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.787647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.787771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.787803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.787910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.787941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 
00:27:10.473 [2024-11-20 11:21:37.788197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.788230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.788356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.788388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.788492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.788524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.788643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.788675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.788869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.788901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 
00:27:10.473 [2024-11-20 11:21:37.789037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.789072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.789196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.789229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.789433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.789466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.789734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.789768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.789973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.790007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 
00:27:10.473 [2024-11-20 11:21:37.790140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.790172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.790287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.790319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.790487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.790520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.790654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.790687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.790799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.790831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 
00:27:10.473 [2024-11-20 11:21:37.790940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.790980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.791094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.791127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.791304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.791336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.791445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.791477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.791616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.791648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 
00:27:10.473 [2024-11-20 11:21:37.791760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.791796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.791978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.792012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.792192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.473 [2024-11-20 11:21:37.792225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.473 qpair failed and we were unable to recover it. 00:27:10.473 [2024-11-20 11:21:37.792339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.792370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 00:27:10.474 [2024-11-20 11:21:37.792560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.792593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 
00:27:10.474 [2024-11-20 11:21:37.792773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.792805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 00:27:10.474 [2024-11-20 11:21:37.792907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.792938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 00:27:10.474 [2024-11-20 11:21:37.793134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.793166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 00:27:10.474 [2024-11-20 11:21:37.793351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.793384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 00:27:10.474 [2024-11-20 11:21:37.793501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.793532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 
00:27:10.474 [2024-11-20 11:21:37.793631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.793663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 00:27:10.474 [2024-11-20 11:21:37.793773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.793806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 00:27:10.474 [2024-11-20 11:21:37.793908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.793940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 00:27:10.474 [2024-11-20 11:21:37.794057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.794089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 00:27:10.474 [2024-11-20 11:21:37.794206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.794239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 
00:27:10.474 [2024-11-20 11:21:37.794344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.794374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it. 00:27:10.474 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[log elided: three further connect() failed (errno = 111) retries against tqpair=0x16e5ba0, addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it."]
00:27:10.474 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:10.474 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.474 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[log elided: four interleaved connect() failed (errno = 111) retries against tqpair=0x7f684c000b90, addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it."]
00:27:10.474 [2024-11-20 11:21:37.795802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.474 [2024-11-20 11:21:37.795833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.474 qpair failed and we were unable to recover it.
[log elided: the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats from 11:21:37.795 through 11:21:37.800 for tqpair=0x7f684c000b90, ending with one final failure against tqpair=0x16e5ba0, addr=10.0.0.2, port=4420]
00:27:10.475 [2024-11-20 11:21:37.801131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.801164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 00:27:10.475 [2024-11-20 11:21:37.801285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.801316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 00:27:10.475 [2024-11-20 11:21:37.801485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.801517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 00:27:10.475 [2024-11-20 11:21:37.801693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.801725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 00:27:10.475 [2024-11-20 11:21:37.801915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.801954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 
00:27:10.475 [2024-11-20 11:21:37.802155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.802188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 00:27:10.475 [2024-11-20 11:21:37.802373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.802406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 00:27:10.475 [2024-11-20 11:21:37.802591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.802622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 00:27:10.475 [2024-11-20 11:21:37.802860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.802893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 00:27:10.475 [2024-11-20 11:21:37.803141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.803175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 
00:27:10.475 [2024-11-20 11:21:37.803287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.475 [2024-11-20 11:21:37.803319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.475 qpair failed and we were unable to recover it. 00:27:10.475 [2024-11-20 11:21:37.803508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.803540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.803665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.803698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.803828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.803859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.804034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.804069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 
00:27:10.476 [2024-11-20 11:21:37.804265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.804298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.804467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.804499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.804621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.804652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.804902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.804934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.805073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.805106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 
00:27:10.476 [2024-11-20 11:21:37.805215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.805248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.805435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.805467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.805732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.805764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.805895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.805927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.806167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.806200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 
00:27:10.476 [2024-11-20 11:21:37.806327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.806358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.806460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.806498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.806634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.806666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.806884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.806916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.807112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.807150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 
00:27:10.476 [2024-11-20 11:21:37.807333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.807364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.807467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.807497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.807597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.807629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.807750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.807780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.807880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.807910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 
00:27:10.476 [2024-11-20 11:21:37.808109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.808142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.808285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.808316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.808423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.808454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.808635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.808666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.808787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.808817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 
00:27:10.476 [2024-11-20 11:21:37.809066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.809100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.809218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.809249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.809356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.809387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.809582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.476 [2024-11-20 11:21:37.809613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.476 qpair failed and we were unable to recover it. 00:27:10.476 [2024-11-20 11:21:37.809737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.809768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 
00:27:10.477 [2024-11-20 11:21:37.809965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.809998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.810175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.810206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.810308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.810339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.810450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.810481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.810613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.810644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 
00:27:10.477 [2024-11-20 11:21:37.810833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.810864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.811042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.811074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.811179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.811210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.811336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.811366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.811560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.811590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 
00:27:10.477 [2024-11-20 11:21:37.811716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.811747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.811849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.811879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.811993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.812026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.812133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.812164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.812288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.812318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 
00:27:10.477 [2024-11-20 11:21:37.812501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.812532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.812704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.812735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.812908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.812939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.813063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.813094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.813283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.813314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 
00:27:10.477 [2024-11-20 11:21:37.813422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.813453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.813634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.813670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.813775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.813806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.813909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.813939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.814128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.814159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 
00:27:10.477 [2024-11-20 11:21:37.814399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.814430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.814601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.814631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.814756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.814786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.815022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.815055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 00:27:10.477 [2024-11-20 11:21:37.815163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.477 [2024-11-20 11:21:37.815193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.477 qpair failed and we were unable to recover it. 
00:27:10.477 [2024-11-20 11:21:37.815306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.477 [2024-11-20 11:21:37.815337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.477 qpair failed and we were unable to recover it.
00:27:10.477 [... the three-line connect()-failed / sock-connection-error / "qpair failed" sequence above repeats continuously for tqpair=0x7f684c000b90 (addr=10.0.0.2, port=4420) from 11:21:37.815 through 11:21:37.838; only the distinct interleaved lines are kept below ...]
00:27:10.479 Malloc0
00:27:10.480 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:10.480 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:10.480 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:10.480 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:10.481 [2024-11-20 11:21:37.836118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:10.481 [2024-11-20 11:21:37.838239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.481 [2024-11-20 11:21:37.838289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:10.481 qpair failed and we were unable to recover it.
00:27:10.481 [... the same three-line failure sequence then repeats for tqpair=0x16e5ba0 (addr=10.0.0.2, port=4420) through 11:21:37.839 ...]
00:27:10.481 [2024-11-20 11:21:37.839802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.839834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.840016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.840051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.840250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.840283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.840529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.840561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.840770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.840801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 
00:27:10.481 [2024-11-20 11:21:37.841045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.841080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.841291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.841323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.841471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.841504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.841701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.841732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.842026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.842060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 
00:27:10.481 [2024-11-20 11:21:37.842338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.842371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.842553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.842585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.842715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.842747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.843024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.843059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 00:27:10.481 [2024-11-20 11:21:37.843242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.843274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.481 qpair failed and we were unable to recover it. 
00:27:10.481 [2024-11-20 11:21:37.843449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.481 [2024-11-20 11:21:37.843480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.843728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.843760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.843885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.843915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.844131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.844165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.844290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.844321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 
00:27:10.482 [2024-11-20 11:21:37.844506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.844537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.844666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.844697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.482 [2024-11-20 11:21:37.844912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.844943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.845158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.845189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.482 qpair failed and we were unable to recover it. 
00:27:10.482 [2024-11-20 11:21:37.845358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.845389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.482 [2024-11-20 11:21:37.845597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.845628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.482 [2024-11-20 11:21:37.845816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.845847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.846103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.846135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 
00:27:10.482 [2024-11-20 11:21:37.846341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.846372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.846493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.846524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.846784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.846815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.846991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.847024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.847162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.847193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 
00:27:10.482 [2024-11-20 11:21:37.847427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.847457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.847641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.847672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.847916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.847958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.848207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.848239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.848423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.848453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 
00:27:10.482 [2024-11-20 11:21:37.848633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.848664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.848768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.848799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.849036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.849068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.849237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.849268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.849468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.849498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 
00:27:10.482 [2024-11-20 11:21:37.849624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.849655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.849888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.849919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.850124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.850160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.482 qpair failed and we were unable to recover it. 00:27:10.482 [2024-11-20 11:21:37.850289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.482 [2024-11-20 11:21:37.850321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.850561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.850592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 
00:27:10.483 [2024-11-20 11:21:37.850698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.850732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.850925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.850966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.851150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.851180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.851416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.851446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.851630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.851660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 
00:27:10.483 [2024-11-20 11:21:37.851781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.851811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.852096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.852129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.852300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.852331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.852502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.852533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.852650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.852679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 
00:27:10.483 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.483 [2024-11-20 11:21:37.852894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.852925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.853119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.853152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:10.483 [2024-11-20 11:21:37.853331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.853367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.483 [2024-11-20 11:21:37.853572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.853604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 
00:27:10.483 [2024-11-20 11:21:37.853773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.853804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.483 [2024-11-20 11:21:37.853994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.854025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.854203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.854234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.854461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.854491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.854754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.854785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 
00:27:10.483 [2024-11-20 11:21:37.854997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.855029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.855263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.855294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.855479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.855510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.855691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.855722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.855903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.855934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 
00:27:10.483 [2024-11-20 11:21:37.856182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.856214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.856409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.856440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.856557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.856587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.856704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.856733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 00:27:10.483 [2024-11-20 11:21:37.856844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.483 [2024-11-20 11:21:37.856875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.483 qpair failed and we were unable to recover it. 
00:27:10.483 [2024-11-20 11:21:37.857134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.484 [2024-11-20 11:21:37.857167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.484 qpair failed and we were unable to recover it. 00:27:10.484 [2024-11-20 11:21:37.857421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.484 [2024-11-20 11:21:37.857451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.484 qpair failed and we were unable to recover it. 00:27:10.484 [2024-11-20 11:21:37.857641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.484 [2024-11-20 11:21:37.857671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.484 qpair failed and we were unable to recover it. 00:27:10.484 [2024-11-20 11:21:37.857780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.484 [2024-11-20 11:21:37.857810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.484 qpair failed and we were unable to recover it. 00:27:10.484 [2024-11-20 11:21:37.857982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.484 [2024-11-20 11:21:37.858013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420 00:27:10.484 qpair failed and we were unable to recover it. 
00:27:10.484 [2024-11-20 11:21:37.858196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.858226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.858353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.858384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.858495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.858526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.858645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.858675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.858796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.858832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.859070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.859101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.859279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.859309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.859434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.859465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.859589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.859620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.859793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.859823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.860004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.860036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.860149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.860179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.860351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.860381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.860529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.860559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.860660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.860690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:10.484 [2024-11-20 11:21:37.860968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.861001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.861185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.861216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:10.484 [2024-11-20 11:21:37.861385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.861416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:10.484 [2024-11-20 11:21:37.861672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.861703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:10.484 [2024-11-20 11:21:37.861907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.861937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.862129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.862160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.862353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.862383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.862628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.484 [2024-11-20 11:21:37.862659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.484 qpair failed and we were unable to recover it.
00:27:10.484 [2024-11-20 11:21:37.862894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.485 [2024-11-20 11:21:37.862924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 [2024-11-20 11:21:37.863160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.485 [2024-11-20 11:21:37.863192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 [2024-11-20 11:21:37.863364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.485 [2024-11-20 11:21:37.863394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 [2024-11-20 11:21:37.863515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.485 [2024-11-20 11:21:37.863545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 [2024-11-20 11:21:37.863717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.485 [2024-11-20 11:21:37.863748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 [2024-11-20 11:21:37.863895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.485 [2024-11-20 11:21:37.863925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f684c000b90 with addr=10.0.0.2, port=4420
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 [2024-11-20 11:21:37.864157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.485 [2024-11-20 11:21:37.864193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5ba0 with addr=10.0.0.2, port=4420
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 [2024-11-20 11:21:37.864330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:10.485 [2024-11-20 11:21:37.866831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.485 [2024-11-20 11:21:37.866966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.485 [2024-11-20 11:21:37.867013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.485 [2024-11-20 11:21:37.867038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.485 [2024-11-20 11:21:37.867060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.485 [2024-11-20 11:21:37.867114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:10.485 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:10.485 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:10.485 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:10.485 [2024-11-20 11:21:37.876718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.485 [2024-11-20 11:21:37.876819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.485 [2024-11-20 11:21:37.876861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.485 [2024-11-20 11:21:37.876883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.485 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:10.485 [2024-11-20 11:21:37.876907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.485 [2024-11-20 11:21:37.876967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 11:21:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 23783
00:27:10.485 [2024-11-20 11:21:37.886764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.485 [2024-11-20 11:21:37.886871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.485 [2024-11-20 11:21:37.886899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.485 [2024-11-20 11:21:37.886913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.485 [2024-11-20 11:21:37.886927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.485 [2024-11-20 11:21:37.886962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 [2024-11-20 11:21:37.896714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.485 [2024-11-20 11:21:37.896779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.485 [2024-11-20 11:21:37.896798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.485 [2024-11-20 11:21:37.896807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.485 [2024-11-20 11:21:37.896815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.485 [2024-11-20 11:21:37.896835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.485 qpair failed and we were unable to recover it.
00:27:10.485 [2024-11-20 11:21:37.906705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.746 [2024-11-20 11:21:37.906778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.746 [2024-11-20 11:21:37.906793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.746 [2024-11-20 11:21:37.906800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.746 [2024-11-20 11:21:37.906806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.746 [2024-11-20 11:21:37.906820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.746 qpair failed and we were unable to recover it.
00:27:10.746 [2024-11-20 11:21:37.916706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.746 [2024-11-20 11:21:37.916782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.746 [2024-11-20 11:21:37.916797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.746 [2024-11-20 11:21:37.916803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.746 [2024-11-20 11:21:37.916809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.746 [2024-11-20 11:21:37.916824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.746 qpair failed and we were unable to recover it.
00:27:10.746 [2024-11-20 11:21:37.926715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.746 [2024-11-20 11:21:37.926772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.746 [2024-11-20 11:21:37.926788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.746 [2024-11-20 11:21:37.926795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.746 [2024-11-20 11:21:37.926801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.746 [2024-11-20 11:21:37.926816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.746 qpair failed and we were unable to recover it.
00:27:10.746 [2024-11-20 11:21:37.936692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.746 [2024-11-20 11:21:37.936750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.746 [2024-11-20 11:21:37.936768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.746 [2024-11-20 11:21:37.936774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.746 [2024-11-20 11:21:37.936781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.746 [2024-11-20 11:21:37.936796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.746 qpair failed and we were unable to recover it.
00:27:10.746 [2024-11-20 11:21:37.946824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.746 [2024-11-20 11:21:37.946879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.746 [2024-11-20 11:21:37.946894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.746 [2024-11-20 11:21:37.946901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.746 [2024-11-20 11:21:37.946907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.746 [2024-11-20 11:21:37.946922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.746 qpair failed and we were unable to recover it.
00:27:10.746 [2024-11-20 11:21:37.956834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.746 [2024-11-20 11:21:37.956912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.746 [2024-11-20 11:21:37.956926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.746 [2024-11-20 11:21:37.956932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.746 [2024-11-20 11:21:37.956938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.746 [2024-11-20 11:21:37.956958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.746 qpair failed and we were unable to recover it.
00:27:10.746 [2024-11-20 11:21:37.966854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.746 [2024-11-20 11:21:37.966908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.746 [2024-11-20 11:21:37.966922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.746 [2024-11-20 11:21:37.966929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.746 [2024-11-20 11:21:37.966935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.746 [2024-11-20 11:21:37.966954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.746 qpair failed and we were unable to recover it.
00:27:10.746 [2024-11-20 11:21:37.976887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.746 [2024-11-20 11:21:37.976951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.746 [2024-11-20 11:21:37.976966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.746 [2024-11-20 11:21:37.976973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.746 [2024-11-20 11:21:37.976979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.746 [2024-11-20 11:21:37.976997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.746 qpair failed and we were unable to recover it.
00:27:10.746 [2024-11-20 11:21:37.986956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.747 [2024-11-20 11:21:37.987011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.747 [2024-11-20 11:21:37.987025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.747 [2024-11-20 11:21:37.987032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.747 [2024-11-20 11:21:37.987038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.747 [2024-11-20 11:21:37.987053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.747 qpair failed and we were unable to recover it.
00:27:10.747 [2024-11-20 11:21:37.996934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.747 [2024-11-20 11:21:37.996988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.747 [2024-11-20 11:21:37.997002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.747 [2024-11-20 11:21:37.997008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.747 [2024-11-20 11:21:37.997014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.747 [2024-11-20 11:21:37.997029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.747 qpair failed and we were unable to recover it.
00:27:10.747 [2024-11-20 11:21:38.006980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.747 [2024-11-20 11:21:38.007038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.747 [2024-11-20 11:21:38.007052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.747 [2024-11-20 11:21:38.007059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.747 [2024-11-20 11:21:38.007065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.747 [2024-11-20 11:21:38.007079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.747 qpair failed and we were unable to recover it.
00:27:10.747 [2024-11-20 11:21:38.017042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.747 [2024-11-20 11:21:38.017102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.747 [2024-11-20 11:21:38.017115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.747 [2024-11-20 11:21:38.017122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.747 [2024-11-20 11:21:38.017129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.747 [2024-11-20 11:21:38.017143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.747 qpair failed and we were unable to recover it.
00:27:10.747 [2024-11-20 11:21:38.027049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.747 [2024-11-20 11:21:38.027103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.747 [2024-11-20 11:21:38.027119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.747 [2024-11-20 11:21:38.027126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.747 [2024-11-20 11:21:38.027132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.747 [2024-11-20 11:21:38.027148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.747 qpair failed and we were unable to recover it.
00:27:10.747 [2024-11-20 11:21:38.037085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.747 [2024-11-20 11:21:38.037136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.747 [2024-11-20 11:21:38.037152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.747 [2024-11-20 11:21:38.037158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.747 [2024-11-20 11:21:38.037165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.747 [2024-11-20 11:21:38.037179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.747 qpair failed and we were unable to recover it.
00:27:10.747 [2024-11-20 11:21:38.047127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.747 [2024-11-20 11:21:38.047181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.747 [2024-11-20 11:21:38.047195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.747 [2024-11-20 11:21:38.047202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.747 [2024-11-20 11:21:38.047208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.747 [2024-11-20 11:21:38.047223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.747 qpair failed and we were unable to recover it.
00:27:10.747 [2024-11-20 11:21:38.057120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.747 [2024-11-20 11:21:38.057176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.747 [2024-11-20 11:21:38.057190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.747 [2024-11-20 11:21:38.057197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.747 [2024-11-20 11:21:38.057204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:10.747 [2024-11-20 11:21:38.057218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:10.747 qpair failed and we were unable to recover it.
00:27:10.747 [2024-11-20 11:21:38.067154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.747 [2024-11-20 11:21:38.067222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.747 [2024-11-20 11:21:38.067239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.747 [2024-11-20 11:21:38.067246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.747 [2024-11-20 11:21:38.067252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.747 [2024-11-20 11:21:38.067265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.747 qpair failed and we were unable to recover it. 
00:27:10.747 [2024-11-20 11:21:38.077129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.747 [2024-11-20 11:21:38.077181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.747 [2024-11-20 11:21:38.077195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.747 [2024-11-20 11:21:38.077202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.747 [2024-11-20 11:21:38.077208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.747 [2024-11-20 11:21:38.077222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.747 qpair failed and we were unable to recover it. 
00:27:10.747 [2024-11-20 11:21:38.087212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.747 [2024-11-20 11:21:38.087268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.747 [2024-11-20 11:21:38.087281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.747 [2024-11-20 11:21:38.087288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.747 [2024-11-20 11:21:38.087294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.747 [2024-11-20 11:21:38.087309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.747 qpair failed and we were unable to recover it. 
00:27:10.747 [2024-11-20 11:21:38.097171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.747 [2024-11-20 11:21:38.097228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.747 [2024-11-20 11:21:38.097242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.097249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.097255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.097269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.107271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.107328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.107342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.107349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.107355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.107372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.117295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.117348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.117362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.117369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.117375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.117389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.127327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.127377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.127392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.127399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.127406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.127420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.137368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.137428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.137441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.137448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.137454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.137468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.147312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.147366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.147379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.147386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.147392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.147406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.157457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.157544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.157558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.157565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.157571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.157585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.167433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.167486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.167501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.167507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.167514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.167528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.177483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.177552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.177565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.177572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.177578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.177593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.187495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.187553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.187568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.187575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.187580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.187595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.197528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.197583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.197602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.197610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.748 [2024-11-20 11:21:38.197616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.748 [2024-11-20 11:21:38.197631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.748 qpair failed and we were unable to recover it. 
00:27:10.748 [2024-11-20 11:21:38.207553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.748 [2024-11-20 11:21:38.207622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.748 [2024-11-20 11:21:38.207636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.748 [2024-11-20 11:21:38.207643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.749 [2024-11-20 11:21:38.207649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.749 [2024-11-20 11:21:38.207664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.749 qpair failed and we were unable to recover it. 
00:27:10.749 [2024-11-20 11:21:38.217670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.749 [2024-11-20 11:21:38.217728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.749 [2024-11-20 11:21:38.217742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.749 [2024-11-20 11:21:38.217749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.749 [2024-11-20 11:21:38.217755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.749 [2024-11-20 11:21:38.217770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.749 qpair failed and we were unable to recover it. 
00:27:10.749 [2024-11-20 11:21:38.227584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.749 [2024-11-20 11:21:38.227641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.749 [2024-11-20 11:21:38.227657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.749 [2024-11-20 11:21:38.227664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.749 [2024-11-20 11:21:38.227671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.749 [2024-11-20 11:21:38.227685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.749 qpair failed and we were unable to recover it. 
00:27:10.749 [2024-11-20 11:21:38.237588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.749 [2024-11-20 11:21:38.237638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.749 [2024-11-20 11:21:38.237652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.749 [2024-11-20 11:21:38.237659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.749 [2024-11-20 11:21:38.237668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:10.749 [2024-11-20 11:21:38.237683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.749 qpair failed and we were unable to recover it. 
00:27:11.009 [2024-11-20 11:21:38.247594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.009 [2024-11-20 11:21:38.247648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.009 [2024-11-20 11:21:38.247662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.009 [2024-11-20 11:21:38.247669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.009 [2024-11-20 11:21:38.247675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.009 [2024-11-20 11:21:38.247689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.009 qpair failed and we were unable to recover it. 
00:27:11.009 [2024-11-20 11:21:38.257704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.009 [2024-11-20 11:21:38.257761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.009 [2024-11-20 11:21:38.257775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.009 [2024-11-20 11:21:38.257783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.009 [2024-11-20 11:21:38.257789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.009 [2024-11-20 11:21:38.257804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.009 qpair failed and we were unable to recover it. 
00:27:11.009 [2024-11-20 11:21:38.267769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.009 [2024-11-20 11:21:38.267873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.009 [2024-11-20 11:21:38.267887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.009 [2024-11-20 11:21:38.267894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.009 [2024-11-20 11:21:38.267901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.009 [2024-11-20 11:21:38.267915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.009 qpair failed and we were unable to recover it. 
00:27:11.009 [2024-11-20 11:21:38.277792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.009 [2024-11-20 11:21:38.277860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.009 [2024-11-20 11:21:38.277875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.009 [2024-11-20 11:21:38.277881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.009 [2024-11-20 11:21:38.277888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.009 [2024-11-20 11:21:38.277902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.009 qpair failed and we were unable to recover it. 
00:27:11.009 [2024-11-20 11:21:38.287824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.009 [2024-11-20 11:21:38.287881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.009 [2024-11-20 11:21:38.287895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.010 [2024-11-20 11:21:38.287902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.010 [2024-11-20 11:21:38.287908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.010 [2024-11-20 11:21:38.287922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.010 qpair failed and we were unable to recover it. 
00:27:11.010 [2024-11-20 11:21:38.297827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.010 [2024-11-20 11:21:38.297885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.010 [2024-11-20 11:21:38.297898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.010 [2024-11-20 11:21:38.297905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.010 [2024-11-20 11:21:38.297911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.010 [2024-11-20 11:21:38.297925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.010 qpair failed and we were unable to recover it. 
00:27:11.010 [2024-11-20 11:21:38.307852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.010 [2024-11-20 11:21:38.307912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.010 [2024-11-20 11:21:38.307926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.010 [2024-11-20 11:21:38.307933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.010 [2024-11-20 11:21:38.307939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.010 [2024-11-20 11:21:38.307955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.010 qpair failed and we were unable to recover it. 
00:27:11.010 [2024-11-20 11:21:38.317892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.010 [2024-11-20 11:21:38.317957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.010 [2024-11-20 11:21:38.317972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.010 [2024-11-20 11:21:38.317978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.010 [2024-11-20 11:21:38.317984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.010 [2024-11-20 11:21:38.317998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.010 qpair failed and we were unable to recover it. 
00:27:11.010 [2024-11-20 11:21:38.327905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.010 [2024-11-20 11:21:38.327966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.010 [2024-11-20 11:21:38.327987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.010 [2024-11-20 11:21:38.327994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.010 [2024-11-20 11:21:38.328000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.010 [2024-11-20 11:21:38.328015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.010 qpair failed and we were unable to recover it. 
00:27:11.010 [2024-11-20 11:21:38.337939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.010 [2024-11-20 11:21:38.338000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.010 [2024-11-20 11:21:38.338014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.010 [2024-11-20 11:21:38.338021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.010 [2024-11-20 11:21:38.338027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.010 [2024-11-20 11:21:38.338041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.010 qpair failed and we were unable to recover it. 
00:27:11.010 [2024-11-20 11:21:38.347969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.010 [2024-11-20 11:21:38.348026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.010 [2024-11-20 11:21:38.348040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.010 [2024-11-20 11:21:38.348046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.010 [2024-11-20 11:21:38.348052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.010 [2024-11-20 11:21:38.348067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.010 qpair failed and we were unable to recover it. 
00:27:11.010 [2024-11-20 11:21:38.358001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.010 [2024-11-20 11:21:38.358061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.010 [2024-11-20 11:21:38.358075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.010 [2024-11-20 11:21:38.358081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.010 [2024-11-20 11:21:38.358087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.010 [2024-11-20 11:21:38.358101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.010 qpair failed and we were unable to recover it.
00:27:11.010 [2024-11-20 11:21:38.367958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.010 [2024-11-20 11:21:38.368011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.010 [2024-11-20 11:21:38.368026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.010 [2024-11-20 11:21:38.368032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.010 [2024-11-20 11:21:38.368041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.010 [2024-11-20 11:21:38.368056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.010 qpair failed and we were unable to recover it.
00:27:11.010 [2024-11-20 11:21:38.378073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.010 [2024-11-20 11:21:38.378149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.010 [2024-11-20 11:21:38.378163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.010 [2024-11-20 11:21:38.378170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.010 [2024-11-20 11:21:38.378175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.010 [2024-11-20 11:21:38.378190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.010 qpair failed and we were unable to recover it.
00:27:11.010 [2024-11-20 11:21:38.388133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.010 [2024-11-20 11:21:38.388189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.010 [2024-11-20 11:21:38.388203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.010 [2024-11-20 11:21:38.388210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.010 [2024-11-20 11:21:38.388216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.010 [2024-11-20 11:21:38.388230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.010 qpair failed and we were unable to recover it.
00:27:11.010 [2024-11-20 11:21:38.398104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.010 [2024-11-20 11:21:38.398156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.010 [2024-11-20 11:21:38.398169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.010 [2024-11-20 11:21:38.398176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.398182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.398196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.011 [2024-11-20 11:21:38.408133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.011 [2024-11-20 11:21:38.408213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.011 [2024-11-20 11:21:38.408227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.011 [2024-11-20 11:21:38.408234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.408240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.408254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.011 [2024-11-20 11:21:38.418207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.011 [2024-11-20 11:21:38.418269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.011 [2024-11-20 11:21:38.418282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.011 [2024-11-20 11:21:38.418289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.418295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.418310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.011 [2024-11-20 11:21:38.428277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.011 [2024-11-20 11:21:38.428331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.011 [2024-11-20 11:21:38.428346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.011 [2024-11-20 11:21:38.428353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.428359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.428374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.011 [2024-11-20 11:21:38.438265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.011 [2024-11-20 11:21:38.438322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.011 [2024-11-20 11:21:38.438336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.011 [2024-11-20 11:21:38.438343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.438349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.438363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.011 [2024-11-20 11:21:38.448302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.011 [2024-11-20 11:21:38.448359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.011 [2024-11-20 11:21:38.448374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.011 [2024-11-20 11:21:38.448381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.448387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.448402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.011 [2024-11-20 11:21:38.458292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.011 [2024-11-20 11:21:38.458356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.011 [2024-11-20 11:21:38.458374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.011 [2024-11-20 11:21:38.458381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.458387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.458402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.011 [2024-11-20 11:21:38.468309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.011 [2024-11-20 11:21:38.468406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.011 [2024-11-20 11:21:38.468421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.011 [2024-11-20 11:21:38.468427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.468433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.468448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.011 [2024-11-20 11:21:38.478377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.011 [2024-11-20 11:21:38.478430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.011 [2024-11-20 11:21:38.478444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.011 [2024-11-20 11:21:38.478450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.478456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.478471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.011 [2024-11-20 11:21:38.488353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.011 [2024-11-20 11:21:38.488417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.011 [2024-11-20 11:21:38.488430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.011 [2024-11-20 11:21:38.488437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.488443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.488457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.011 [2024-11-20 11:21:38.498398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.011 [2024-11-20 11:21:38.498458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.011 [2024-11-20 11:21:38.498472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.011 [2024-11-20 11:21:38.498478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.011 [2024-11-20 11:21:38.498488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.011 [2024-11-20 11:21:38.498502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.011 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.508453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.508511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.508525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.508531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.508538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.508552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.272 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.518454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.518541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.518554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.518561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.518567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.518581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.272 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.528485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.528538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.528553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.528560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.528566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.528582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.272 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.538537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.538597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.538612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.538620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.538627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.538641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.272 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.548543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.548606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.548620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.548627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.548634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.548648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.272 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.558509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.558566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.558581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.558588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.558595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.558610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.272 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.568610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.568670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.568684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.568691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.568698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.568713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.272 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.578630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.578709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.578724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.578731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.578737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.578752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.272 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.588659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.588713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.588730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.588738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.588745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.588759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.272 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.598688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.598747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.598763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.598770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.598777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.598792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.272 qpair failed and we were unable to recover it.
00:27:11.272 [2024-11-20 11:21:38.608626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.272 [2024-11-20 11:21:38.608680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.272 [2024-11-20 11:21:38.608695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.272 [2024-11-20 11:21:38.608702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.272 [2024-11-20 11:21:38.608709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.272 [2024-11-20 11:21:38.608723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.273 qpair failed and we were unable to recover it.
00:27:11.273 [2024-11-20 11:21:38.618749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.273 [2024-11-20 11:21:38.618805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.273 [2024-11-20 11:21:38.618819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.273 [2024-11-20 11:21:38.618827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.273 [2024-11-20 11:21:38.618834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.273 [2024-11-20 11:21:38.618848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.273 qpair failed and we were unable to recover it.
00:27:11.273 [2024-11-20 11:21:38.628712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.273 [2024-11-20 11:21:38.628807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.273 [2024-11-20 11:21:38.628823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.273 [2024-11-20 11:21:38.628830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.273 [2024-11-20 11:21:38.628840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.273 [2024-11-20 11:21:38.628856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.273 qpair failed and we were unable to recover it.
00:27:11.273 [2024-11-20 11:21:38.638802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.273 [2024-11-20 11:21:38.638858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.273 [2024-11-20 11:21:38.638873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.273 [2024-11-20 11:21:38.638881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.273 [2024-11-20 11:21:38.638887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.273 [2024-11-20 11:21:38.638903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.273 qpair failed and we were unable to recover it.
00:27:11.273 [2024-11-20 11:21:38.648803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.273 [2024-11-20 11:21:38.648862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.273 [2024-11-20 11:21:38.648876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.273 [2024-11-20 11:21:38.648883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.273 [2024-11-20 11:21:38.648890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.273 [2024-11-20 11:21:38.648904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.273 qpair failed and we were unable to recover it.
00:27:11.273 [2024-11-20 11:21:38.658873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.273 [2024-11-20 11:21:38.658931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.273 [2024-11-20 11:21:38.658946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.273 [2024-11-20 11:21:38.658958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.273 [2024-11-20 11:21:38.658964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.273 [2024-11-20 11:21:38.658980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.273 qpair failed and we were unable to recover it.
00:27:11.273 [2024-11-20 11:21:38.668894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.273 [2024-11-20 11:21:38.668952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.273 [2024-11-20 11:21:38.668967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.273 [2024-11-20 11:21:38.668975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.273 [2024-11-20 11:21:38.668981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.273 [2024-11-20 11:21:38.668996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.273 qpair failed and we were unable to recover it.
00:27:11.273 [2024-11-20 11:21:38.679029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.273 [2024-11-20 11:21:38.679095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.273 [2024-11-20 11:21:38.679110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.273 [2024-11-20 11:21:38.679118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.273 [2024-11-20 11:21:38.679124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.273 [2024-11-20 11:21:38.679139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.273 qpair failed and we were unable to recover it.
00:27:11.273 [2024-11-20 11:21:38.689067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.273 [2024-11-20 11:21:38.689153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.273 [2024-11-20 11:21:38.689171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.273 [2024-11-20 11:21:38.689178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.273 [2024-11-20 11:21:38.689185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.273 [2024-11-20 11:21:38.689201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.273 qpair failed and we were unable to recover it.
00:27:11.273 [2024-11-20 11:21:38.699012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.273 [2024-11-20 11:21:38.699070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.273 [2024-11-20 11:21:38.699084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.273 [2024-11-20 11:21:38.699092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.273 [2024-11-20 11:21:38.699099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.273 [2024-11-20 11:21:38.699115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.273 qpair failed and we were unable to recover it.
00:27:11.273 [2024-11-20 11:21:38.709055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.273 [2024-11-20 11:21:38.709113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.273 [2024-11-20 11:21:38.709128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.273 [2024-11-20 11:21:38.709135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.273 [2024-11-20 11:21:38.709142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.273 [2024-11-20 11:21:38.709157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.273 qpair failed and we were unable to recover it. 
00:27:11.273 [2024-11-20 11:21:38.719045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.273 [2024-11-20 11:21:38.719101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.273 [2024-11-20 11:21:38.719118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.273 [2024-11-20 11:21:38.719126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.273 [2024-11-20 11:21:38.719132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.273 [2024-11-20 11:21:38.719147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.273 qpair failed and we were unable to recover it. 
00:27:11.273 [2024-11-20 11:21:38.729007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.274 [2024-11-20 11:21:38.729062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.274 [2024-11-20 11:21:38.729078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.274 [2024-11-20 11:21:38.729085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.274 [2024-11-20 11:21:38.729092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.274 [2024-11-20 11:21:38.729107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.274 qpair failed and we were unable to recover it. 
00:27:11.274 [2024-11-20 11:21:38.739112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.274 [2024-11-20 11:21:38.739170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.274 [2024-11-20 11:21:38.739186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.274 [2024-11-20 11:21:38.739194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.274 [2024-11-20 11:21:38.739201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.274 [2024-11-20 11:21:38.739216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.274 qpair failed and we were unable to recover it. 
00:27:11.274 [2024-11-20 11:21:38.749130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.274 [2024-11-20 11:21:38.749187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.274 [2024-11-20 11:21:38.749202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.274 [2024-11-20 11:21:38.749209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.274 [2024-11-20 11:21:38.749216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.274 [2024-11-20 11:21:38.749231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.274 qpair failed and we were unable to recover it. 
00:27:11.274 [2024-11-20 11:21:38.759156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.274 [2024-11-20 11:21:38.759208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.274 [2024-11-20 11:21:38.759222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.274 [2024-11-20 11:21:38.759229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.274 [2024-11-20 11:21:38.759240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.274 [2024-11-20 11:21:38.759255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.274 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.769129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.769215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.769231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.769239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.769247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.769263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.779227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.779282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.779296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.779304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.779310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.779325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.789282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.789357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.789372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.789379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.789386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.789400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.799208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.799271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.799287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.799294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.799300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.799316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.809243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.809297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.809312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.809319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.809326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.809341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.819338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.819393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.819407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.819414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.819420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.819435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.829358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.829416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.829432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.829440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.829446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.829462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.839395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.839451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.839465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.839472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.839479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.839494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.849360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.849446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.849464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.849471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.849477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.849492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.859403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.859458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.859471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.859478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.859485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.859500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.534 [2024-11-20 11:21:38.869497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.534 [2024-11-20 11:21:38.869580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.534 [2024-11-20 11:21:38.869594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.534 [2024-11-20 11:21:38.869601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.534 [2024-11-20 11:21:38.869607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.534 [2024-11-20 11:21:38.869622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.534 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.879502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.535 [2024-11-20 11:21:38.879560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.535 [2024-11-20 11:21:38.879575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.535 [2024-11-20 11:21:38.879582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.535 [2024-11-20 11:21:38.879589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.535 [2024-11-20 11:21:38.879604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.535 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.889527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.535 [2024-11-20 11:21:38.889584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.535 [2024-11-20 11:21:38.889599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.535 [2024-11-20 11:21:38.889606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.535 [2024-11-20 11:21:38.889615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.535 [2024-11-20 11:21:38.889630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.535 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.899600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.535 [2024-11-20 11:21:38.899657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.535 [2024-11-20 11:21:38.899671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.535 [2024-11-20 11:21:38.899679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.535 [2024-11-20 11:21:38.899686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.535 [2024-11-20 11:21:38.899701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.535 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.909599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.535 [2024-11-20 11:21:38.909657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.535 [2024-11-20 11:21:38.909672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.535 [2024-11-20 11:21:38.909679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.535 [2024-11-20 11:21:38.909685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.535 [2024-11-20 11:21:38.909701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.535 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.919629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.535 [2024-11-20 11:21:38.919687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.535 [2024-11-20 11:21:38.919701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.535 [2024-11-20 11:21:38.919709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.535 [2024-11-20 11:21:38.919715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.535 [2024-11-20 11:21:38.919730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.535 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.929685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.535 [2024-11-20 11:21:38.929740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.535 [2024-11-20 11:21:38.929755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.535 [2024-11-20 11:21:38.929762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.535 [2024-11-20 11:21:38.929769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.535 [2024-11-20 11:21:38.929784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.535 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.939613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.535 [2024-11-20 11:21:38.939670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.535 [2024-11-20 11:21:38.939685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.535 [2024-11-20 11:21:38.939693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.535 [2024-11-20 11:21:38.939699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.535 [2024-11-20 11:21:38.939715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.535 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.949705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.535 [2024-11-20 11:21:38.949762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.535 [2024-11-20 11:21:38.949777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.535 [2024-11-20 11:21:38.949784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.535 [2024-11-20 11:21:38.949791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.535 [2024-11-20 11:21:38.949805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.535 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.959750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.535 [2024-11-20 11:21:38.959804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.535 [2024-11-20 11:21:38.959821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.535 [2024-11-20 11:21:38.959829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.535 [2024-11-20 11:21:38.959836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.535 [2024-11-20 11:21:38.959852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.535 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.969764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.535 [2024-11-20 11:21:38.969819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.535 [2024-11-20 11:21:38.969834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.535 [2024-11-20 11:21:38.969842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.535 [2024-11-20 11:21:38.969849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:11.535 [2024-11-20 11:21:38.969863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.535 qpair failed and we were unable to recover it. 
00:27:11.535 [2024-11-20 11:21:38.979808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.535 [2024-11-20 11:21:38.979864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.535 [2024-11-20 11:21:38.979883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.535 [2024-11-20 11:21:38.979891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.535 [2024-11-20 11:21:38.979898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.535 [2024-11-20 11:21:38.979913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.535 qpair failed and we were unable to recover it.
00:27:11.535 [2024-11-20 11:21:38.989827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.536 [2024-11-20 11:21:38.989885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.536 [2024-11-20 11:21:38.989900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.536 [2024-11-20 11:21:38.989907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.536 [2024-11-20 11:21:38.989914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.536 [2024-11-20 11:21:38.989929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.536 qpair failed and we were unable to recover it.
00:27:11.536 [2024-11-20 11:21:38.999849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.536 [2024-11-20 11:21:38.999908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.536 [2024-11-20 11:21:38.999922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.536 [2024-11-20 11:21:38.999929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.536 [2024-11-20 11:21:38.999935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.536 [2024-11-20 11:21:38.999955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.536 qpair failed and we were unable to recover it.
00:27:11.536 [2024-11-20 11:21:39.009872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.536 [2024-11-20 11:21:39.009932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.536 [2024-11-20 11:21:39.009952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.536 [2024-11-20 11:21:39.009960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.536 [2024-11-20 11:21:39.009966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.536 [2024-11-20 11:21:39.009981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.536 qpair failed and we were unable to recover it.
00:27:11.536 [2024-11-20 11:21:39.019877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.536 [2024-11-20 11:21:39.019936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.536 [2024-11-20 11:21:39.019962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.536 [2024-11-20 11:21:39.019969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.536 [2024-11-20 11:21:39.019979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.536 [2024-11-20 11:21:39.019994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.536 qpair failed and we were unable to recover it.
00:27:11.812 [2024-11-20 11:21:39.029942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.812 [2024-11-20 11:21:39.030001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.812 [2024-11-20 11:21:39.030016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.812 [2024-11-20 11:21:39.030023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.812 [2024-11-20 11:21:39.030029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.812 [2024-11-20 11:21:39.030045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.812 qpair failed and we were unable to recover it.
00:27:11.813 [2024-11-20 11:21:39.039938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.813 [2024-11-20 11:21:39.039998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.813 [2024-11-20 11:21:39.040013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.813 [2024-11-20 11:21:39.040020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.813 [2024-11-20 11:21:39.040026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.813 [2024-11-20 11:21:39.040041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.813 qpair failed and we were unable to recover it.
00:27:11.813 [2024-11-20 11:21:39.049993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.813 [2024-11-20 11:21:39.050048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.813 [2024-11-20 11:21:39.050062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.813 [2024-11-20 11:21:39.050070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.813 [2024-11-20 11:21:39.050076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.813 [2024-11-20 11:21:39.050091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.813 qpair failed and we were unable to recover it.
00:27:11.813 [2024-11-20 11:21:39.060053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.813 [2024-11-20 11:21:39.060155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.813 [2024-11-20 11:21:39.060169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.813 [2024-11-20 11:21:39.060175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.813 [2024-11-20 11:21:39.060182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.813 [2024-11-20 11:21:39.060198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.813 qpair failed and we were unable to recover it.
00:27:11.813 [2024-11-20 11:21:39.070050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.813 [2024-11-20 11:21:39.070110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.813 [2024-11-20 11:21:39.070124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.814 [2024-11-20 11:21:39.070131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.814 [2024-11-20 11:21:39.070138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.814 [2024-11-20 11:21:39.070152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.814 qpair failed and we were unable to recover it.
00:27:11.814 [2024-11-20 11:21:39.080061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.814 [2024-11-20 11:21:39.080117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.814 [2024-11-20 11:21:39.080131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.814 [2024-11-20 11:21:39.080138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.814 [2024-11-20 11:21:39.080145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.814 [2024-11-20 11:21:39.080160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.814 qpair failed and we were unable to recover it.
00:27:11.814 [2024-11-20 11:21:39.090052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.814 [2024-11-20 11:21:39.090110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.814 [2024-11-20 11:21:39.090125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.814 [2024-11-20 11:21:39.090133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.814 [2024-11-20 11:21:39.090140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.814 [2024-11-20 11:21:39.090155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.815 qpair failed and we were unable to recover it.
00:27:11.815 [2024-11-20 11:21:39.100089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.815 [2024-11-20 11:21:39.100146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.815 [2024-11-20 11:21:39.100160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.815 [2024-11-20 11:21:39.100167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.815 [2024-11-20 11:21:39.100174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.815 [2024-11-20 11:21:39.100189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.815 qpair failed and we were unable to recover it.
00:27:11.815 [2024-11-20 11:21:39.110186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.815 [2024-11-20 11:21:39.110250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.815 [2024-11-20 11:21:39.110268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.815 [2024-11-20 11:21:39.110275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.815 [2024-11-20 11:21:39.110282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.815 [2024-11-20 11:21:39.110297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.815 qpair failed and we were unable to recover it.
00:27:11.815 [2024-11-20 11:21:39.120128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.815 [2024-11-20 11:21:39.120185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.815 [2024-11-20 11:21:39.120200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.815 [2024-11-20 11:21:39.120207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.816 [2024-11-20 11:21:39.120213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.816 [2024-11-20 11:21:39.120228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.816 qpair failed and we were unable to recover it.
00:27:11.816 [2024-11-20 11:21:39.130193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.816 [2024-11-20 11:21:39.130253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.816 [2024-11-20 11:21:39.130268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.816 [2024-11-20 11:21:39.130276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.816 [2024-11-20 11:21:39.130282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.816 [2024-11-20 11:21:39.130296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.816 qpair failed and we were unable to recover it.
00:27:11.816 [2024-11-20 11:21:39.140249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.816 [2024-11-20 11:21:39.140309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.816 [2024-11-20 11:21:39.140323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.816 [2024-11-20 11:21:39.140331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.816 [2024-11-20 11:21:39.140337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.816 [2024-11-20 11:21:39.140351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.816 qpair failed and we were unable to recover it.
00:27:11.816 [2024-11-20 11:21:39.150282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.816 [2024-11-20 11:21:39.150342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.816 [2024-11-20 11:21:39.150356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.816 [2024-11-20 11:21:39.150364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.816 [2024-11-20 11:21:39.150373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.816 [2024-11-20 11:21:39.150388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.816 qpair failed and we were unable to recover it.
00:27:11.816 [2024-11-20 11:21:39.160239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.816 [2024-11-20 11:21:39.160292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.817 [2024-11-20 11:21:39.160306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.817 [2024-11-20 11:21:39.160313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.817 [2024-11-20 11:21:39.160320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.817 [2024-11-20 11:21:39.160334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.817 qpair failed and we were unable to recover it.
00:27:11.817 [2024-11-20 11:21:39.170340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.817 [2024-11-20 11:21:39.170395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.817 [2024-11-20 11:21:39.170410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.817 [2024-11-20 11:21:39.170417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.817 [2024-11-20 11:21:39.170424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.817 [2024-11-20 11:21:39.170439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.817 qpair failed and we were unable to recover it.
00:27:11.817 [2024-11-20 11:21:39.180351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.817 [2024-11-20 11:21:39.180406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.817 [2024-11-20 11:21:39.180420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.818 [2024-11-20 11:21:39.180427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.818 [2024-11-20 11:21:39.180434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.818 [2024-11-20 11:21:39.180448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.818 qpair failed and we were unable to recover it.
00:27:11.818 [2024-11-20 11:21:39.190382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.818 [2024-11-20 11:21:39.190438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.818 [2024-11-20 11:21:39.190454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.818 [2024-11-20 11:21:39.190461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.818 [2024-11-20 11:21:39.190469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.818 [2024-11-20 11:21:39.190484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.818 qpair failed and we were unable to recover it.
00:27:11.818 [2024-11-20 11:21:39.200453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.818 [2024-11-20 11:21:39.200516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.819 [2024-11-20 11:21:39.200531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.819 [2024-11-20 11:21:39.200539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.819 [2024-11-20 11:21:39.200546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.819 [2024-11-20 11:21:39.200561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.819 qpair failed and we were unable to recover it.
00:27:11.819 [2024-11-20 11:21:39.210441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.819 [2024-11-20 11:21:39.210525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.819 [2024-11-20 11:21:39.210540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.819 [2024-11-20 11:21:39.210547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.819 [2024-11-20 11:21:39.210553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.819 [2024-11-20 11:21:39.210568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.819 qpair failed and we were unable to recover it.
00:27:11.819 [2024-11-20 11:21:39.220540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.819 [2024-11-20 11:21:39.220598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.819 [2024-11-20 11:21:39.220614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.820 [2024-11-20 11:21:39.220622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.820 [2024-11-20 11:21:39.220629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.820 [2024-11-20 11:21:39.220645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.820 qpair failed and we were unable to recover it.
00:27:11.820 [2024-11-20 11:21:39.230544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.820 [2024-11-20 11:21:39.230612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.820 [2024-11-20 11:21:39.230628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.820 [2024-11-20 11:21:39.230635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.820 [2024-11-20 11:21:39.230641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.820 [2024-11-20 11:21:39.230656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.820 qpair failed and we were unable to recover it.
00:27:11.820 [2024-11-20 11:21:39.240535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.821 [2024-11-20 11:21:39.240592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.821 [2024-11-20 11:21:39.240610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.821 [2024-11-20 11:21:39.240619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.821 [2024-11-20 11:21:39.240625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.821 [2024-11-20 11:21:39.240641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.821 qpair failed and we were unable to recover it.
00:27:11.821 [2024-11-20 11:21:39.250574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.821 [2024-11-20 11:21:39.250644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.821 [2024-11-20 11:21:39.250658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.821 [2024-11-20 11:21:39.250665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.821 [2024-11-20 11:21:39.250672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.821 [2024-11-20 11:21:39.250686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.821 qpair failed and we were unable to recover it.
00:27:11.821 [2024-11-20 11:21:39.260604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.821 [2024-11-20 11:21:39.260659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.821 [2024-11-20 11:21:39.260673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.821 [2024-11-20 11:21:39.260680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.821 [2024-11-20 11:21:39.260686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.821 [2024-11-20 11:21:39.260702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.821 qpair failed and we were unable to recover it.
00:27:11.821 [2024-11-20 11:21:39.270663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.822 [2024-11-20 11:21:39.270723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.822 [2024-11-20 11:21:39.270739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.822 [2024-11-20 11:21:39.270747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.822 [2024-11-20 11:21:39.270753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.822 [2024-11-20 11:21:39.270767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.822 qpair failed and we were unable to recover it.
00:27:11.822 [2024-11-20 11:21:39.280624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.822 [2024-11-20 11:21:39.280719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.822 [2024-11-20 11:21:39.280734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.822 [2024-11-20 11:21:39.280741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.822 [2024-11-20 11:21:39.280751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.822 [2024-11-20 11:21:39.280766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.822 qpair failed and we were unable to recover it.
00:27:11.822 [2024-11-20 11:21:39.290772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.822 [2024-11-20 11:21:39.290854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.822 [2024-11-20 11:21:39.290869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.822 [2024-11-20 11:21:39.290877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.822 [2024-11-20 11:21:39.290883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:11.822 [2024-11-20 11:21:39.290898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:11.822 qpair failed and we were unable to recover it.
00:27:12.087 [2024-11-20 11:21:39.300638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.087 [2024-11-20 11:21:39.300707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.088 [2024-11-20 11:21:39.300722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.088 [2024-11-20 11:21:39.300729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.088 [2024-11-20 11:21:39.300736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.088 [2024-11-20 11:21:39.300750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.088 qpair failed and we were unable to recover it.
00:27:12.088 [2024-11-20 11:21:39.310740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.088 [2024-11-20 11:21:39.310793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.088 [2024-11-20 11:21:39.310808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.088 [2024-11-20 11:21:39.310815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.088 [2024-11-20 11:21:39.310821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.088 [2024-11-20 11:21:39.310837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.088 qpair failed and we were unable to recover it.
00:27:12.088 [2024-11-20 11:21:39.320770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.088 [2024-11-20 11:21:39.320826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.088 [2024-11-20 11:21:39.320841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.088 [2024-11-20 11:21:39.320847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.088 [2024-11-20 11:21:39.320854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.088 [2024-11-20 11:21:39.320869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.088 qpair failed and we were unable to recover it.
00:27:12.088 [2024-11-20 11:21:39.330747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.088 [2024-11-20 11:21:39.330800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.088 [2024-11-20 11:21:39.330817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.088 [2024-11-20 11:21:39.330824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.088 [2024-11-20 11:21:39.330831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.088 [2024-11-20 11:21:39.330846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.088 qpair failed and we were unable to recover it. 
00:27:12.088 [2024-11-20 11:21:39.340858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.088 [2024-11-20 11:21:39.340913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.088 [2024-11-20 11:21:39.340928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.088 [2024-11-20 11:21:39.340935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.088 [2024-11-20 11:21:39.340942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.088 [2024-11-20 11:21:39.340960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.088 qpair failed and we were unable to recover it. 
00:27:12.088 [2024-11-20 11:21:39.350805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.088 [2024-11-20 11:21:39.350860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.088 [2024-11-20 11:21:39.350875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.088 [2024-11-20 11:21:39.350882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.088 [2024-11-20 11:21:39.350889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.088 [2024-11-20 11:21:39.350903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.088 qpair failed and we were unable to recover it. 
00:27:12.088 [2024-11-20 11:21:39.360924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.088 [2024-11-20 11:21:39.360982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.088 [2024-11-20 11:21:39.360997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.088 [2024-11-20 11:21:39.361005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.088 [2024-11-20 11:21:39.361011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.088 [2024-11-20 11:21:39.361026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.088 qpair failed and we were unable to recover it. 
00:27:12.088 [2024-11-20 11:21:39.370895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.088 [2024-11-20 11:21:39.370945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.088 [2024-11-20 11:21:39.370969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.088 [2024-11-20 11:21:39.370977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.088 [2024-11-20 11:21:39.370983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.088 [2024-11-20 11:21:39.370998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.088 qpair failed and we were unable to recover it. 
00:27:12.088 [2024-11-20 11:21:39.380868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.088 [2024-11-20 11:21:39.380923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.088 [2024-11-20 11:21:39.380938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.088 [2024-11-20 11:21:39.380946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.088 [2024-11-20 11:21:39.380956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.088 [2024-11-20 11:21:39.380971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.088 qpair failed and we were unable to recover it. 
00:27:12.088 [2024-11-20 11:21:39.390988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.088 [2024-11-20 11:21:39.391045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.088 [2024-11-20 11:21:39.391059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.088 [2024-11-20 11:21:39.391066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.088 [2024-11-20 11:21:39.391073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.088 [2024-11-20 11:21:39.391088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.088 qpair failed and we were unable to recover it. 
00:27:12.088 [2024-11-20 11:21:39.401028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.088 [2024-11-20 11:21:39.401082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.088 [2024-11-20 11:21:39.401096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.088 [2024-11-20 11:21:39.401103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.088 [2024-11-20 11:21:39.401110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.088 [2024-11-20 11:21:39.401124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.088 qpair failed and we were unable to recover it. 
00:27:12.088 [2024-11-20 11:21:39.411059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.088 [2024-11-20 11:21:39.411115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.088 [2024-11-20 11:21:39.411129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.411137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.089 [2024-11-20 11:21:39.411146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.089 [2024-11-20 11:21:39.411161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.089 qpair failed and we were unable to recover it. 
00:27:12.089 [2024-11-20 11:21:39.421003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.089 [2024-11-20 11:21:39.421059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.089 [2024-11-20 11:21:39.421074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.421081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.089 [2024-11-20 11:21:39.421087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.089 [2024-11-20 11:21:39.421102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.089 qpair failed and we were unable to recover it. 
00:27:12.089 [2024-11-20 11:21:39.431021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.089 [2024-11-20 11:21:39.431089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.089 [2024-11-20 11:21:39.431104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.431111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.089 [2024-11-20 11:21:39.431118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.089 [2024-11-20 11:21:39.431132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.089 qpair failed and we were unable to recover it. 
00:27:12.089 [2024-11-20 11:21:39.441108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.089 [2024-11-20 11:21:39.441161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.089 [2024-11-20 11:21:39.441175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.441183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.089 [2024-11-20 11:21:39.441189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.089 [2024-11-20 11:21:39.441205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.089 qpair failed and we were unable to recover it. 
00:27:12.089 [2024-11-20 11:21:39.451132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.089 [2024-11-20 11:21:39.451189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.089 [2024-11-20 11:21:39.451204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.451211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.089 [2024-11-20 11:21:39.451218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.089 [2024-11-20 11:21:39.451233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.089 qpair failed and we were unable to recover it. 
00:27:12.089 [2024-11-20 11:21:39.461208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.089 [2024-11-20 11:21:39.461263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.089 [2024-11-20 11:21:39.461278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.461285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.089 [2024-11-20 11:21:39.461291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.089 [2024-11-20 11:21:39.461306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.089 qpair failed and we were unable to recover it. 
00:27:12.089 [2024-11-20 11:21:39.471218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.089 [2024-11-20 11:21:39.471286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.089 [2024-11-20 11:21:39.471302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.471309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.089 [2024-11-20 11:21:39.471315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.089 [2024-11-20 11:21:39.471330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.089 qpair failed and we were unable to recover it. 
00:27:12.089 [2024-11-20 11:21:39.481228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.089 [2024-11-20 11:21:39.481307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.089 [2024-11-20 11:21:39.481322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.481329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.089 [2024-11-20 11:21:39.481335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.089 [2024-11-20 11:21:39.481350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.089 qpair failed and we were unable to recover it. 
00:27:12.089 [2024-11-20 11:21:39.491249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.089 [2024-11-20 11:21:39.491301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.089 [2024-11-20 11:21:39.491316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.491323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.089 [2024-11-20 11:21:39.491329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.089 [2024-11-20 11:21:39.491344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.089 qpair failed and we were unable to recover it. 
00:27:12.089 [2024-11-20 11:21:39.501288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.089 [2024-11-20 11:21:39.501342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.089 [2024-11-20 11:21:39.501361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.501369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.089 [2024-11-20 11:21:39.501376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.089 [2024-11-20 11:21:39.501391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.089 qpair failed and we were unable to recover it. 
00:27:12.089 [2024-11-20 11:21:39.511340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.089 [2024-11-20 11:21:39.511397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.089 [2024-11-20 11:21:39.511412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.089 [2024-11-20 11:21:39.511420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.090 [2024-11-20 11:21:39.511428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.090 [2024-11-20 11:21:39.511443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.090 qpair failed and we were unable to recover it. 
00:27:12.090 [2024-11-20 11:21:39.521329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.090 [2024-11-20 11:21:39.521433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.090 [2024-11-20 11:21:39.521450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.090 [2024-11-20 11:21:39.521457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.090 [2024-11-20 11:21:39.521464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.090 [2024-11-20 11:21:39.521479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.090 qpair failed and we were unable to recover it. 
00:27:12.090 [2024-11-20 11:21:39.531360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.090 [2024-11-20 11:21:39.531424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.090 [2024-11-20 11:21:39.531440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.090 [2024-11-20 11:21:39.531447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.090 [2024-11-20 11:21:39.531453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.090 [2024-11-20 11:21:39.531468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.090 qpair failed and we were unable to recover it. 
00:27:12.090 [2024-11-20 11:21:39.541328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.090 [2024-11-20 11:21:39.541395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.090 [2024-11-20 11:21:39.541410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.090 [2024-11-20 11:21:39.541418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.090 [2024-11-20 11:21:39.541427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.090 [2024-11-20 11:21:39.541442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.090 qpair failed and we were unable to recover it. 
00:27:12.090 [2024-11-20 11:21:39.551410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.090 [2024-11-20 11:21:39.551467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.090 [2024-11-20 11:21:39.551481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.090 [2024-11-20 11:21:39.551488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.090 [2024-11-20 11:21:39.551495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.090 [2024-11-20 11:21:39.551510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.090 qpair failed and we were unable to recover it. 
00:27:12.090 [2024-11-20 11:21:39.561441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.090 [2024-11-20 11:21:39.561498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.090 [2024-11-20 11:21:39.561512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.090 [2024-11-20 11:21:39.561520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.090 [2024-11-20 11:21:39.561527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.090 [2024-11-20 11:21:39.561543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.090 qpair failed and we were unable to recover it. 
00:27:12.090 [2024-11-20 11:21:39.571521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.090 [2024-11-20 11:21:39.571578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.090 [2024-11-20 11:21:39.571593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.090 [2024-11-20 11:21:39.571601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.090 [2024-11-20 11:21:39.571607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.090 [2024-11-20 11:21:39.571622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.090 qpair failed and we were unable to recover it. 
00:27:12.349 [2024-11-20 11:21:39.581508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.349 [2024-11-20 11:21:39.581582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.350 [2024-11-20 11:21:39.581596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.350 [2024-11-20 11:21:39.581603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.350 [2024-11-20 11:21:39.581609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.350 [2024-11-20 11:21:39.581624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.350 qpair failed and we were unable to recover it. 
00:27:12.350 [2024-11-20 11:21:39.591528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.350 [2024-11-20 11:21:39.591589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.350 [2024-11-20 11:21:39.591603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.350 [2024-11-20 11:21:39.591610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.350 [2024-11-20 11:21:39.591617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.350 [2024-11-20 11:21:39.591631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.350 qpair failed and we were unable to recover it. 
00:27:12.350 [2024-11-20 11:21:39.601550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.350 [2024-11-20 11:21:39.601606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.350 [2024-11-20 11:21:39.601621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.350 [2024-11-20 11:21:39.601629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.350 [2024-11-20 11:21:39.601635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.350 [2024-11-20 11:21:39.601650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.350 qpair failed and we were unable to recover it.
00:27:12.350 [2024-11-20 11:21:39.611509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.350 [2024-11-20 11:21:39.611562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.350 [2024-11-20 11:21:39.611577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.350 [2024-11-20 11:21:39.611584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.350 [2024-11-20 11:21:39.611591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.350 [2024-11-20 11:21:39.611605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.350 qpair failed and we were unable to recover it.
00:27:12.350 [2024-11-20 11:21:39.621612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.350 [2024-11-20 11:21:39.621668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.350 [2024-11-20 11:21:39.621684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.350 [2024-11-20 11:21:39.621691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.350 [2024-11-20 11:21:39.621698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.350 [2024-11-20 11:21:39.621712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.350 qpair failed and we were unable to recover it.
00:27:12.350 [2024-11-20 11:21:39.631674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.350 [2024-11-20 11:21:39.631738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.350 [2024-11-20 11:21:39.631760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.350 [2024-11-20 11:21:39.631767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.350 [2024-11-20 11:21:39.631773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.350 [2024-11-20 11:21:39.631790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.350 qpair failed and we were unable to recover it.
00:27:12.350 [2024-11-20 11:21:39.641666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.350 [2024-11-20 11:21:39.641724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.350 [2024-11-20 11:21:39.641739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.350 [2024-11-20 11:21:39.641747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.350 [2024-11-20 11:21:39.641753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.350 [2024-11-20 11:21:39.641768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.350 qpair failed and we were unable to recover it.
00:27:12.350 [2024-11-20 11:21:39.651680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.350 [2024-11-20 11:21:39.651734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.350 [2024-11-20 11:21:39.651749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.350 [2024-11-20 11:21:39.651756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.350 [2024-11-20 11:21:39.651762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.350 [2024-11-20 11:21:39.651778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.350 qpair failed and we were unable to recover it.
00:27:12.350 [2024-11-20 11:21:39.661718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.350 [2024-11-20 11:21:39.661775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.350 [2024-11-20 11:21:39.661789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.350 [2024-11-20 11:21:39.661796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.350 [2024-11-20 11:21:39.661803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.350 [2024-11-20 11:21:39.661818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.350 qpair failed and we were unable to recover it.
00:27:12.350 [2024-11-20 11:21:39.671761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.350 [2024-11-20 11:21:39.671836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.350 [2024-11-20 11:21:39.671851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.350 [2024-11-20 11:21:39.671858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.351 [2024-11-20 11:21:39.671867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.351 [2024-11-20 11:21:39.671883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.351 qpair failed and we were unable to recover it.
00:27:12.351 [2024-11-20 11:21:39.681755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.351 [2024-11-20 11:21:39.681810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.351 [2024-11-20 11:21:39.681825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.351 [2024-11-20 11:21:39.681832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.351 [2024-11-20 11:21:39.681839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.351 [2024-11-20 11:21:39.681853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.351 qpair failed and we were unable to recover it.
00:27:12.351 [2024-11-20 11:21:39.691793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.351 [2024-11-20 11:21:39.691848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.351 [2024-11-20 11:21:39.691865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.351 [2024-11-20 11:21:39.691873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.351 [2024-11-20 11:21:39.691881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.351 [2024-11-20 11:21:39.691897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.351 qpair failed and we were unable to recover it.
00:27:12.351 [2024-11-20 11:21:39.701866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.351 [2024-11-20 11:21:39.701949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.351 [2024-11-20 11:21:39.701965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.351 [2024-11-20 11:21:39.701972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.351 [2024-11-20 11:21:39.701979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.351 [2024-11-20 11:21:39.701994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.351 qpair failed and we were unable to recover it.
00:27:12.351 [2024-11-20 11:21:39.711897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.351 [2024-11-20 11:21:39.711963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.351 [2024-11-20 11:21:39.711979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.351 [2024-11-20 11:21:39.711987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.351 [2024-11-20 11:21:39.711993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.351 [2024-11-20 11:21:39.712008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.351 qpair failed and we were unable to recover it.
00:27:12.351 [2024-11-20 11:21:39.721858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.351 [2024-11-20 11:21:39.721913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.351 [2024-11-20 11:21:39.721928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.351 [2024-11-20 11:21:39.721935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.351 [2024-11-20 11:21:39.721942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.351 [2024-11-20 11:21:39.721961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.351 qpair failed and we were unable to recover it.
00:27:12.351 [2024-11-20 11:21:39.731974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.351 [2024-11-20 11:21:39.732075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.351 [2024-11-20 11:21:39.732092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.351 [2024-11-20 11:21:39.732100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.351 [2024-11-20 11:21:39.732106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.351 [2024-11-20 11:21:39.732122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.351 qpair failed and we were unable to recover it.
00:27:12.351 [2024-11-20 11:21:39.741942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.351 [2024-11-20 11:21:39.742002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.351 [2024-11-20 11:21:39.742016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.351 [2024-11-20 11:21:39.742023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.351 [2024-11-20 11:21:39.742030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.351 [2024-11-20 11:21:39.742045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.351 qpair failed and we were unable to recover it.
00:27:12.351 [2024-11-20 11:21:39.751970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.351 [2024-11-20 11:21:39.752057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.351 [2024-11-20 11:21:39.752072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.351 [2024-11-20 11:21:39.752080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.351 [2024-11-20 11:21:39.752086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.351 [2024-11-20 11:21:39.752101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.351 qpair failed and we were unable to recover it.
00:27:12.351 [2024-11-20 11:21:39.761964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.351 [2024-11-20 11:21:39.762020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.351 [2024-11-20 11:21:39.762039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.351 [2024-11-20 11:21:39.762047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.351 [2024-11-20 11:21:39.762053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.351 [2024-11-20 11:21:39.762069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.351 qpair failed and we were unable to recover it.
00:27:12.351 [2024-11-20 11:21:39.772022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.351 [2024-11-20 11:21:39.772077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.352 [2024-11-20 11:21:39.772093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.352 [2024-11-20 11:21:39.772100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.352 [2024-11-20 11:21:39.772107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.352 [2024-11-20 11:21:39.772122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.352 qpair failed and we were unable to recover it.
00:27:12.352 [2024-11-20 11:21:39.782124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.352 [2024-11-20 11:21:39.782203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.352 [2024-11-20 11:21:39.782217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.352 [2024-11-20 11:21:39.782225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.352 [2024-11-20 11:21:39.782231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.352 [2024-11-20 11:21:39.782245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.352 qpair failed and we were unable to recover it.
00:27:12.352 [2024-11-20 11:21:39.792014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.352 [2024-11-20 11:21:39.792082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.352 [2024-11-20 11:21:39.792096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.352 [2024-11-20 11:21:39.792103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.352 [2024-11-20 11:21:39.792109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.352 [2024-11-20 11:21:39.792123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.352 qpair failed and we were unable to recover it.
00:27:12.352 [2024-11-20 11:21:39.802161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.352 [2024-11-20 11:21:39.802222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.352 [2024-11-20 11:21:39.802235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.352 [2024-11-20 11:21:39.802243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.352 [2024-11-20 11:21:39.802253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.352 [2024-11-20 11:21:39.802267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.352 qpair failed and we were unable to recover it.
00:27:12.352 [2024-11-20 11:21:39.812170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.352 [2024-11-20 11:21:39.812224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.352 [2024-11-20 11:21:39.812238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.352 [2024-11-20 11:21:39.812245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.352 [2024-11-20 11:21:39.812252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.352 [2024-11-20 11:21:39.812266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.352 qpair failed and we were unable to recover it.
00:27:12.352 [2024-11-20 11:21:39.822166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.352 [2024-11-20 11:21:39.822223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.352 [2024-11-20 11:21:39.822238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.352 [2024-11-20 11:21:39.822245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.352 [2024-11-20 11:21:39.822252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.352 [2024-11-20 11:21:39.822267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.352 qpair failed and we were unable to recover it.
00:27:12.352 [2024-11-20 11:21:39.832249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.352 [2024-11-20 11:21:39.832307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.352 [2024-11-20 11:21:39.832322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.352 [2024-11-20 11:21:39.832329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.352 [2024-11-20 11:21:39.832335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.352 [2024-11-20 11:21:39.832350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.352 qpair failed and we were unable to recover it.
00:27:12.352 [2024-11-20 11:21:39.842229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.352 [2024-11-20 11:21:39.842286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.352 [2024-11-20 11:21:39.842300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.352 [2024-11-20 11:21:39.842308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.352 [2024-11-20 11:21:39.842315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.352 [2024-11-20 11:21:39.842329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.352 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 11:21:39.852182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.612 [2024-11-20 11:21:39.852245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.612 [2024-11-20 11:21:39.852260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.612 [2024-11-20 11:21:39.852267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.612 [2024-11-20 11:21:39.852274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.612 [2024-11-20 11:21:39.852288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 11:21:39.862275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.612 [2024-11-20 11:21:39.862341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.612 [2024-11-20 11:21:39.862355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.612 [2024-11-20 11:21:39.862362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.612 [2024-11-20 11:21:39.862369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.612 [2024-11-20 11:21:39.862384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 11:21:39.872276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.612 [2024-11-20 11:21:39.872333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.612 [2024-11-20 11:21:39.872348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.612 [2024-11-20 11:21:39.872355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.612 [2024-11-20 11:21:39.872361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.612 [2024-11-20 11:21:39.872376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 11:21:39.882333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.612 [2024-11-20 11:21:39.882388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.612 [2024-11-20 11:21:39.882402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.612 [2024-11-20 11:21:39.882409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.612 [2024-11-20 11:21:39.882416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.612 [2024-11-20 11:21:39.882431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 11:21:39.892373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.612 [2024-11-20 11:21:39.892434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.612 [2024-11-20 11:21:39.892453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.612 [2024-11-20 11:21:39.892460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.612 [2024-11-20 11:21:39.892466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.612 [2024-11-20 11:21:39.892481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 11:21:39.902333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.612 [2024-11-20 11:21:39.902390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.612 [2024-11-20 11:21:39.902405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.612 [2024-11-20 11:21:39.902411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.612 [2024-11-20 11:21:39.902418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.612 [2024-11-20 11:21:39.902433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.612 qpair failed and we were unable to recover it.
00:27:12.612 [2024-11-20 11:21:39.912458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.612 [2024-11-20 11:21:39.912518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.612 [2024-11-20 11:21:39.912532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.613 [2024-11-20 11:21:39.912540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.613 [2024-11-20 11:21:39.912546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.613 [2024-11-20 11:21:39.912561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.613 qpair failed and we were unable to recover it.
00:27:12.613 [2024-11-20 11:21:39.922459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.613 [2024-11-20 11:21:39.922520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.613 [2024-11-20 11:21:39.922535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.613 [2024-11-20 11:21:39.922542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.613 [2024-11-20 11:21:39.922548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.613 [2024-11-20 11:21:39.922563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.613 qpair failed and we were unable to recover it.
00:27:12.613 [2024-11-20 11:21:39.932479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.613 [2024-11-20 11:21:39.932530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.613 [2024-11-20 11:21:39.932545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.613 [2024-11-20 11:21:39.932556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.613 [2024-11-20 11:21:39.932563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.613 [2024-11-20 11:21:39.932578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.613 qpair failed and we were unable to recover it.
00:27:12.613 [2024-11-20 11:21:39.942519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.613 [2024-11-20 11:21:39.942576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.613 [2024-11-20 11:21:39.942590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.613 [2024-11-20 11:21:39.942597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.613 [2024-11-20 11:21:39.942604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.613 [2024-11-20 11:21:39.942619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.613 qpair failed and we were unable to recover it.
00:27:12.613 [2024-11-20 11:21:39.952542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.613 [2024-11-20 11:21:39.952602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.613 [2024-11-20 11:21:39.952616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.613 [2024-11-20 11:21:39.952624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.613 [2024-11-20 11:21:39.952630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.613 [2024-11-20 11:21:39.952645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.613 qpair failed and we were unable to recover it. 
00:27:12.613 [2024-11-20 11:21:39.962565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.613 [2024-11-20 11:21:39.962621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.613 [2024-11-20 11:21:39.962635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.613 [2024-11-20 11:21:39.962643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.613 [2024-11-20 11:21:39.962649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.613 [2024-11-20 11:21:39.962665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.613 qpair failed and we were unable to recover it. 
00:27:12.613 [2024-11-20 11:21:39.972591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.613 [2024-11-20 11:21:39.972671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.613 [2024-11-20 11:21:39.972685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.613 [2024-11-20 11:21:39.972692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.613 [2024-11-20 11:21:39.972699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.613 [2024-11-20 11:21:39.972713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.613 qpair failed and we were unable to recover it. 
00:27:12.613 [2024-11-20 11:21:39.982630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.613 [2024-11-20 11:21:39.982684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.613 [2024-11-20 11:21:39.982698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.613 [2024-11-20 11:21:39.982704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.613 [2024-11-20 11:21:39.982711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.613 [2024-11-20 11:21:39.982725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.613 qpair failed and we were unable to recover it. 
00:27:12.613 [2024-11-20 11:21:39.992650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.613 [2024-11-20 11:21:39.992753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.613 [2024-11-20 11:21:39.992767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.613 [2024-11-20 11:21:39.992775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.613 [2024-11-20 11:21:39.992781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.613 [2024-11-20 11:21:39.992795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.613 qpair failed and we were unable to recover it. 
00:27:12.613 [2024-11-20 11:21:40.002663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.613 [2024-11-20 11:21:40.002728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.613 [2024-11-20 11:21:40.002745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.613 [2024-11-20 11:21:40.002753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.613 [2024-11-20 11:21:40.002759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.613 [2024-11-20 11:21:40.002775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.613 qpair failed and we were unable to recover it. 
00:27:12.613 [2024-11-20 11:21:40.012793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.613 [2024-11-20 11:21:40.012900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.614 [2024-11-20 11:21:40.012917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.614 [2024-11-20 11:21:40.012924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.614 [2024-11-20 11:21:40.012931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.614 [2024-11-20 11:21:40.012952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.614 qpair failed and we were unable to recover it. 
00:27:12.614 [2024-11-20 11:21:40.022695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.614 [2024-11-20 11:21:40.022754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.614 [2024-11-20 11:21:40.022773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.614 [2024-11-20 11:21:40.022781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.614 [2024-11-20 11:21:40.022788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.614 [2024-11-20 11:21:40.022803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.614 qpair failed and we were unable to recover it. 
00:27:12.614 [2024-11-20 11:21:40.032790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.614 [2024-11-20 11:21:40.032848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.614 [2024-11-20 11:21:40.032864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.614 [2024-11-20 11:21:40.032872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.614 [2024-11-20 11:21:40.032879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.614 [2024-11-20 11:21:40.032894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.614 qpair failed and we were unable to recover it. 
00:27:12.614 [2024-11-20 11:21:40.042823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.614 [2024-11-20 11:21:40.042885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.614 [2024-11-20 11:21:40.042901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.614 [2024-11-20 11:21:40.042909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.614 [2024-11-20 11:21:40.042916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.614 [2024-11-20 11:21:40.042932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.614 qpair failed and we were unable to recover it. 
00:27:12.614 [2024-11-20 11:21:40.052884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.614 [2024-11-20 11:21:40.052957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.614 [2024-11-20 11:21:40.052974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.614 [2024-11-20 11:21:40.052982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.614 [2024-11-20 11:21:40.052988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.614 [2024-11-20 11:21:40.053004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.614 qpair failed and we were unable to recover it. 
00:27:12.614 [2024-11-20 11:21:40.062937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.614 [2024-11-20 11:21:40.063011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.614 [2024-11-20 11:21:40.063027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.614 [2024-11-20 11:21:40.063039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.614 [2024-11-20 11:21:40.063045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.614 [2024-11-20 11:21:40.063061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.614 qpair failed and we were unable to recover it. 
00:27:12.614 [2024-11-20 11:21:40.072936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.614 [2024-11-20 11:21:40.073004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.614 [2024-11-20 11:21:40.073020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.614 [2024-11-20 11:21:40.073027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.614 [2024-11-20 11:21:40.073034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.614 [2024-11-20 11:21:40.073049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.614 qpair failed and we were unable to recover it. 
00:27:12.614 [2024-11-20 11:21:40.082963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.614 [2024-11-20 11:21:40.083031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.614 [2024-11-20 11:21:40.083048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.614 [2024-11-20 11:21:40.083055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.614 [2024-11-20 11:21:40.083062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.614 [2024-11-20 11:21:40.083078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.614 qpair failed and we were unable to recover it. 
00:27:12.614 [2024-11-20 11:21:40.092983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.614 [2024-11-20 11:21:40.093053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.614 [2024-11-20 11:21:40.093069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.614 [2024-11-20 11:21:40.093077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.614 [2024-11-20 11:21:40.093084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.614 [2024-11-20 11:21:40.093101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.614 qpair failed and we were unable to recover it. 
00:27:12.614 [2024-11-20 11:21:40.102992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.614 [2024-11-20 11:21:40.103054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.614 [2024-11-20 11:21:40.103070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.614 [2024-11-20 11:21:40.103078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.614 [2024-11-20 11:21:40.103084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.614 [2024-11-20 11:21:40.103100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.615 qpair failed and we were unable to recover it. 
00:27:12.874 [2024-11-20 11:21:40.113004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.874 [2024-11-20 11:21:40.113065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.874 [2024-11-20 11:21:40.113082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.874 [2024-11-20 11:21:40.113090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.874 [2024-11-20 11:21:40.113097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.874 [2024-11-20 11:21:40.113117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.874 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.123078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.875 [2024-11-20 11:21:40.123149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.875 [2024-11-20 11:21:40.123166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.875 [2024-11-20 11:21:40.123173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.875 [2024-11-20 11:21:40.123180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.875 [2024-11-20 11:21:40.123196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.875 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.133058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.875 [2024-11-20 11:21:40.133135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.875 [2024-11-20 11:21:40.133151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.875 [2024-11-20 11:21:40.133158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.875 [2024-11-20 11:21:40.133165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.875 [2024-11-20 11:21:40.133180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.875 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.143089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.875 [2024-11-20 11:21:40.143149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.875 [2024-11-20 11:21:40.143165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.875 [2024-11-20 11:21:40.143173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.875 [2024-11-20 11:21:40.143179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.875 [2024-11-20 11:21:40.143194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.875 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.153129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.875 [2024-11-20 11:21:40.153202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.875 [2024-11-20 11:21:40.153222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.875 [2024-11-20 11:21:40.153230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.875 [2024-11-20 11:21:40.153236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.875 [2024-11-20 11:21:40.153252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.875 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.163138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.875 [2024-11-20 11:21:40.163193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.875 [2024-11-20 11:21:40.163209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.875 [2024-11-20 11:21:40.163216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.875 [2024-11-20 11:21:40.163223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.875 [2024-11-20 11:21:40.163238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.875 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.173164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.875 [2024-11-20 11:21:40.173247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.875 [2024-11-20 11:21:40.173263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.875 [2024-11-20 11:21:40.173271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.875 [2024-11-20 11:21:40.173278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.875 [2024-11-20 11:21:40.173293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.875 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.183137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.875 [2024-11-20 11:21:40.183193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.875 [2024-11-20 11:21:40.183209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.875 [2024-11-20 11:21:40.183216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.875 [2024-11-20 11:21:40.183223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.875 [2024-11-20 11:21:40.183238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.875 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.193327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.875 [2024-11-20 11:21:40.193447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.875 [2024-11-20 11:21:40.193464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.875 [2024-11-20 11:21:40.193475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.875 [2024-11-20 11:21:40.193482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.875 [2024-11-20 11:21:40.193498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.875 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.203196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.875 [2024-11-20 11:21:40.203284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.875 [2024-11-20 11:21:40.203300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.875 [2024-11-20 11:21:40.203308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.875 [2024-11-20 11:21:40.203314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.875 [2024-11-20 11:21:40.203330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.875 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.213295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.875 [2024-11-20 11:21:40.213348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.875 [2024-11-20 11:21:40.213364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.875 [2024-11-20 11:21:40.213371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.875 [2024-11-20 11:21:40.213378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:12.875 [2024-11-20 11:21:40.213393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.875 qpair failed and we were unable to recover it. 
00:27:12.875 [2024-11-20 11:21:40.223316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.875 [2024-11-20 11:21:40.223376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.875 [2024-11-20 11:21:40.223391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.875 [2024-11-20 11:21:40.223399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.875 [2024-11-20 11:21:40.223406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.875 [2024-11-20 11:21:40.223421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.875 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.233336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.233393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.233410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.233417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.233424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.233440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.243348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.243406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.243422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.243429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.243436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.243451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.253335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.253430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.253446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.253454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.253461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.253477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.263431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.263489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.263505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.263513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.263520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.263535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.273456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.273521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.273537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.273545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.273552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.273568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.283478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.283533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.283554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.283562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.283568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.283583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.293506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.293588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.293604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.293611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.293618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.293633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.303539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.303598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.303614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.303621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.303628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.303643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.313555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.313654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.313669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.313676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.313683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.313698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.323595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.323649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.323665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.323675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.323682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.323698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.333571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.876 [2024-11-20 11:21:40.333622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.876 [2024-11-20 11:21:40.333639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.876 [2024-11-20 11:21:40.333646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.876 [2024-11-20 11:21:40.333653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.876 [2024-11-20 11:21:40.333668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 11:21:40.343593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.877 [2024-11-20 11:21:40.343656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.877 [2024-11-20 11:21:40.343672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.877 [2024-11-20 11:21:40.343679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.877 [2024-11-20 11:21:40.343685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.877 [2024-11-20 11:21:40.343700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 11:21:40.353616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.877 [2024-11-20 11:21:40.353670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.877 [2024-11-20 11:21:40.353685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.877 [2024-11-20 11:21:40.353692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.877 [2024-11-20 11:21:40.353699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.877 [2024-11-20 11:21:40.353714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 11:21:40.363724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.877 [2024-11-20 11:21:40.363779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.877 [2024-11-20 11:21:40.363795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.877 [2024-11-20 11:21:40.363802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.877 [2024-11-20 11:21:40.363808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:12.877 [2024-11-20 11:21:40.363824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.877 qpair failed and we were unable to recover it.
00:27:13.137 [2024-11-20 11:21:40.373719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.137 [2024-11-20 11:21:40.373773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.137 [2024-11-20 11:21:40.373789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.137 [2024-11-20 11:21:40.373796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.137 [2024-11-20 11:21:40.373802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.137 [2024-11-20 11:21:40.373818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.137 qpair failed and we were unable to recover it.
00:27:13.137 [2024-11-20 11:21:40.383741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.137 [2024-11-20 11:21:40.383804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.383819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.383826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.383832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.383847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.393832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.393888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.393903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.393910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.393917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.393932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.403757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.403808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.403823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.403830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.403836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.403851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.413831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.413895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.413910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.413917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.413924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.413939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.423886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.423966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.423982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.423989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.423995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.424010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.433926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.434005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.434021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.434029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.434035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.434051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.443986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.444042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.444057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.444063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.444069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.444085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.453978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.454032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.454048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.454058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.454064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.454079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.464000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.464056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.464072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.464079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.464085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.464101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.474059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.474120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.474136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.474143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.474149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.474164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.484057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.484115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.484130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.138 [2024-11-20 11:21:40.484137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.138 [2024-11-20 11:21:40.484143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.138 [2024-11-20 11:21:40.484159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.138 qpair failed and we were unable to recover it.
00:27:13.138 [2024-11-20 11:21:40.494098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.138 [2024-11-20 11:21:40.494149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.138 [2024-11-20 11:21:40.494165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.139 [2024-11-20 11:21:40.494172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.139 [2024-11-20 11:21:40.494178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.139 [2024-11-20 11:21:40.494193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.139 qpair failed and we were unable to recover it.
00:27:13.139 [2024-11-20 11:21:40.504092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.139 [2024-11-20 11:21:40.504151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.139 [2024-11-20 11:21:40.504166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.139 [2024-11-20 11:21:40.504173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.139 [2024-11-20 11:21:40.504179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.139 [2024-11-20 11:21:40.504194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.139 qpair failed and we were unable to recover it.
00:27:13.139 [2024-11-20 11:21:40.514190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.139 [2024-11-20 11:21:40.514266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.139 [2024-11-20 11:21:40.514282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.139 [2024-11-20 11:21:40.514289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.139 [2024-11-20 11:21:40.514295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.139 [2024-11-20 11:21:40.514310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.139 qpair failed and we were unable to recover it.
00:27:13.139 [2024-11-20 11:21:40.524114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.139 [2024-11-20 11:21:40.524170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.139 [2024-11-20 11:21:40.524186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.139 [2024-11-20 11:21:40.524193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.139 [2024-11-20 11:21:40.524199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.139 [2024-11-20 11:21:40.524213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.139 qpair failed and we were unable to recover it.
00:27:13.139 [2024-11-20 11:21:40.534192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.139 [2024-11-20 11:21:40.534258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.139 [2024-11-20 11:21:40.534274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.139 [2024-11-20 11:21:40.534281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.139 [2024-11-20 11:21:40.534287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.139 [2024-11-20 11:21:40.534302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.139 qpair failed and we were unable to recover it.
00:27:13.139 [2024-11-20 11:21:40.544179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.139 [2024-11-20 11:21:40.544240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.139 [2024-11-20 11:21:40.544254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.139 [2024-11-20 11:21:40.544261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.139 [2024-11-20 11:21:40.544267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.139 [2024-11-20 11:21:40.544281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.139 qpair failed and we were unable to recover it.
00:27:13.139 [2024-11-20 11:21:40.554204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.139 [2024-11-20 11:21:40.554262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.139 [2024-11-20 11:21:40.554277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.139 [2024-11-20 11:21:40.554284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.139 [2024-11-20 11:21:40.554290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.139 [2024-11-20 11:21:40.554305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.139 qpair failed and we were unable to recover it.
00:27:13.139 [2024-11-20 11:21:40.564280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.139 [2024-11-20 11:21:40.564336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.139 [2024-11-20 11:21:40.564352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.139 [2024-11-20 11:21:40.564359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.139 [2024-11-20 11:21:40.564366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.139 [2024-11-20 11:21:40.564381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.139 qpair failed and we were unable to recover it.
00:27:13.139 [2024-11-20 11:21:40.574340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.139 [2024-11-20 11:21:40.574393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.139 [2024-11-20 11:21:40.574407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.139 [2024-11-20 11:21:40.574414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.139 [2024-11-20 11:21:40.574420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.139 [2024-11-20 11:21:40.574435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.139 qpair failed and we were unable to recover it. 
00:27:13.139 [2024-11-20 11:21:40.584378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.139 [2024-11-20 11:21:40.584434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.139 [2024-11-20 11:21:40.584449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.139 [2024-11-20 11:21:40.584459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.139 [2024-11-20 11:21:40.584465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.139 [2024-11-20 11:21:40.584480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.139 qpair failed and we were unable to recover it. 
00:27:13.139 [2024-11-20 11:21:40.594346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.139 [2024-11-20 11:21:40.594403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.139 [2024-11-20 11:21:40.594418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.139 [2024-11-20 11:21:40.594425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.139 [2024-11-20 11:21:40.594430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.139 [2024-11-20 11:21:40.594445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.139 qpair failed and we were unable to recover it. 
00:27:13.139 [2024-11-20 11:21:40.604391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.139 [2024-11-20 11:21:40.604444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.139 [2024-11-20 11:21:40.604459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.139 [2024-11-20 11:21:40.604465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.139 [2024-11-20 11:21:40.604471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.140 [2024-11-20 11:21:40.604486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.140 qpair failed and we were unable to recover it. 
00:27:13.140 [2024-11-20 11:21:40.614384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.140 [2024-11-20 11:21:40.614441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.140 [2024-11-20 11:21:40.614456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.140 [2024-11-20 11:21:40.614463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.140 [2024-11-20 11:21:40.614469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.140 [2024-11-20 11:21:40.614483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.140 qpair failed and we were unable to recover it. 
00:27:13.140 [2024-11-20 11:21:40.624436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.140 [2024-11-20 11:21:40.624514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.140 [2024-11-20 11:21:40.624529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.140 [2024-11-20 11:21:40.624535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.140 [2024-11-20 11:21:40.624541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.140 [2024-11-20 11:21:40.624556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.140 qpair failed and we were unable to recover it. 
00:27:13.399 [2024-11-20 11:21:40.634483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.399 [2024-11-20 11:21:40.634541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.399 [2024-11-20 11:21:40.634556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.399 [2024-11-20 11:21:40.634563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.399 [2024-11-20 11:21:40.634569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.399 [2024-11-20 11:21:40.634584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.399 qpair failed and we were unable to recover it. 
00:27:13.399 [2024-11-20 11:21:40.644551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.644601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.644616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.644623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.644629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.644643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.654541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.654598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.654612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.654619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.654625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.654639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.664567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.664626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.664640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.664648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.664653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.664668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.674598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.674654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.674669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.674676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.674682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.674696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.684739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.684844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.684860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.684911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.684918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.684935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.694691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.694750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.694766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.694773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.694779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.694794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.704722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.704800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.704815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.704822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.704828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.704843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.714758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.714814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.714830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.714840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.714846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.714862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.724736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.724789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.724806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.724813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.724819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.724835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.734760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.734816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.734832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.734839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.734845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.734860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.744799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.744857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.400 [2024-11-20 11:21:40.744872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.400 [2024-11-20 11:21:40.744879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.400 [2024-11-20 11:21:40.744885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.400 [2024-11-20 11:21:40.744900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.400 qpair failed and we were unable to recover it. 
00:27:13.400 [2024-11-20 11:21:40.754831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.400 [2024-11-20 11:21:40.754887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.401 [2024-11-20 11:21:40.754901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.401 [2024-11-20 11:21:40.754908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.401 [2024-11-20 11:21:40.754914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.401 [2024-11-20 11:21:40.754932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.401 qpair failed and we were unable to recover it. 
00:27:13.401 [2024-11-20 11:21:40.764855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.401 [2024-11-20 11:21:40.764909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.401 [2024-11-20 11:21:40.764925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.401 [2024-11-20 11:21:40.764932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.401 [2024-11-20 11:21:40.764938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.401 [2024-11-20 11:21:40.764957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.401 qpair failed and we were unable to recover it. 
00:27:13.401 [2024-11-20 11:21:40.774882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.401 [2024-11-20 11:21:40.774933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.401 [2024-11-20 11:21:40.774953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.401 [2024-11-20 11:21:40.774960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.401 [2024-11-20 11:21:40.774966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.401 [2024-11-20 11:21:40.774982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.401 qpair failed and we were unable to recover it. 
00:27:13.401 [2024-11-20 11:21:40.784965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.401 [2024-11-20 11:21:40.785023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.401 [2024-11-20 11:21:40.785037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.401 [2024-11-20 11:21:40.785044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.401 [2024-11-20 11:21:40.785050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.401 [2024-11-20 11:21:40.785065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.401 qpair failed and we were unable to recover it. 
00:27:13.401 [2024-11-20 11:21:40.794940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.401 [2024-11-20 11:21:40.795003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.401 [2024-11-20 11:21:40.795017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.401 [2024-11-20 11:21:40.795024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.401 [2024-11-20 11:21:40.795030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.401 [2024-11-20 11:21:40.795045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.401 qpair failed and we were unable to recover it. 
00:27:13.401 [2024-11-20 11:21:40.804963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.401 [2024-11-20 11:21:40.805020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.401 [2024-11-20 11:21:40.805035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.401 [2024-11-20 11:21:40.805042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.401 [2024-11-20 11:21:40.805048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.401 [2024-11-20 11:21:40.805063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.401 qpair failed and we were unable to recover it. 
00:27:13.401 [2024-11-20 11:21:40.815022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.401 [2024-11-20 11:21:40.815082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.401 [2024-11-20 11:21:40.815099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.401 [2024-11-20 11:21:40.815108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.401 [2024-11-20 11:21:40.815115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.401 [2024-11-20 11:21:40.815131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.401 qpair failed and we were unable to recover it. 
00:27:13.401 [2024-11-20 11:21:40.825031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.401 [2024-11-20 11:21:40.825092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.401 [2024-11-20 11:21:40.825107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.401 [2024-11-20 11:21:40.825114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.401 [2024-11-20 11:21:40.825120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.401 [2024-11-20 11:21:40.825135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.401 qpair failed and we were unable to recover it. 
00:27:13.401 [2024-11-20 11:21:40.835099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.401 [2024-11-20 11:21:40.835163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.401 [2024-11-20 11:21:40.835178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.401 [2024-11-20 11:21:40.835185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.401 [2024-11-20 11:21:40.835191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.401 [2024-11-20 11:21:40.835206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.401 qpair failed and we were unable to recover it. 
00:27:13.401 [2024-11-20 11:21:40.845104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.401 [2024-11-20 11:21:40.845172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.401 [2024-11-20 11:21:40.845187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.401 [2024-11-20 11:21:40.845196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.401 [2024-11-20 11:21:40.845202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.401 [2024-11-20 11:21:40.845217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.401 qpair failed and we were unable to recover it.
00:27:13.401 [2024-11-20 11:21:40.855124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.401 [2024-11-20 11:21:40.855180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.401 [2024-11-20 11:21:40.855196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.401 [2024-11-20 11:21:40.855203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.401 [2024-11-20 11:21:40.855208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.401 [2024-11-20 11:21:40.855224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.401 qpair failed and we were unable to recover it.
00:27:13.401 [2024-11-20 11:21:40.865151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.401 [2024-11-20 11:21:40.865209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.401 [2024-11-20 11:21:40.865224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.401 [2024-11-20 11:21:40.865231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.402 [2024-11-20 11:21:40.865237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.402 [2024-11-20 11:21:40.865251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.402 qpair failed and we were unable to recover it.
00:27:13.402 [2024-11-20 11:21:40.875174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.402 [2024-11-20 11:21:40.875228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.402 [2024-11-20 11:21:40.875243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.402 [2024-11-20 11:21:40.875250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.402 [2024-11-20 11:21:40.875256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.402 [2024-11-20 11:21:40.875271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.402 qpair failed and we were unable to recover it.
00:27:13.402 [2024-11-20 11:21:40.885223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.402 [2024-11-20 11:21:40.885282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.402 [2024-11-20 11:21:40.885297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.402 [2024-11-20 11:21:40.885304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.402 [2024-11-20 11:21:40.885310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.402 [2024-11-20 11:21:40.885328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.402 qpair failed and we were unable to recover it.
00:27:13.662 [2024-11-20 11:21:40.895260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.662 [2024-11-20 11:21:40.895311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.662 [2024-11-20 11:21:40.895326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.662 [2024-11-20 11:21:40.895333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.662 [2024-11-20 11:21:40.895339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.662 [2024-11-20 11:21:40.895354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.662 qpair failed and we were unable to recover it.
00:27:13.662 [2024-11-20 11:21:40.905344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.662 [2024-11-20 11:21:40.905427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.662 [2024-11-20 11:21:40.905442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.662 [2024-11-20 11:21:40.905449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.662 [2024-11-20 11:21:40.905455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.662 [2024-11-20 11:21:40.905469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.662 qpair failed and we were unable to recover it.
00:27:13.662 [2024-11-20 11:21:40.915332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.662 [2024-11-20 11:21:40.915396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.662 [2024-11-20 11:21:40.915411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.662 [2024-11-20 11:21:40.915418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.662 [2024-11-20 11:21:40.915424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.662 [2024-11-20 11:21:40.915438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.662 qpair failed and we were unable to recover it.
00:27:13.662 [2024-11-20 11:21:40.925289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.662 [2024-11-20 11:21:40.925358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.662 [2024-11-20 11:21:40.925373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.662 [2024-11-20 11:21:40.925380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.662 [2024-11-20 11:21:40.925386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.662 [2024-11-20 11:21:40.925402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.662 qpair failed and we were unable to recover it.
00:27:13.662 [2024-11-20 11:21:40.935343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.662 [2024-11-20 11:21:40.935401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.662 [2024-11-20 11:21:40.935417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.662 [2024-11-20 11:21:40.935424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.662 [2024-11-20 11:21:40.935430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.662 [2024-11-20 11:21:40.935446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.662 qpair failed and we were unable to recover it.
00:27:13.662 [2024-11-20 11:21:40.945379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.662 [2024-11-20 11:21:40.945560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.662 [2024-11-20 11:21:40.945577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.662 [2024-11-20 11:21:40.945584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.662 [2024-11-20 11:21:40.945590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.662 [2024-11-20 11:21:40.945605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.662 qpair failed and we were unable to recover it.
00:27:13.662 [2024-11-20 11:21:40.955408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.662 [2024-11-20 11:21:40.955467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.662 [2024-11-20 11:21:40.955482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.662 [2024-11-20 11:21:40.955489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.662 [2024-11-20 11:21:40.955495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.662 [2024-11-20 11:21:40.955509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.662 qpair failed and we were unable to recover it.
00:27:13.662 [2024-11-20 11:21:40.965422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:40.965476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:40.965490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:40.965497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:40.965503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:40.965518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:40.975456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:40.975515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:40.975529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:40.975539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:40.975545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:40.975559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:40.985522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:40.985582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:40.985596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:40.985602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:40.985608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:40.985623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:40.995520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:40.995579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:40.995592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:40.995599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:40.995605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:40.995618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:41.005531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:41.005587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:41.005601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:41.005607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:41.005613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:41.005627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:41.015565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:41.015615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:41.015629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:41.015635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:41.015642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:41.015659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:41.025597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:41.025652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:41.025665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:41.025672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:41.025677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:41.025692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:41.035688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:41.035769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:41.035784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:41.035791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:41.035797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:41.035811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:41.045646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:41.045696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:41.045710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:41.045717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:41.045723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:41.045738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:41.055684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:41.055740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:41.055754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:41.055761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:41.055766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:41.055781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:41.065731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:41.065795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:41.065809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.663 [2024-11-20 11:21:41.065816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.663 [2024-11-20 11:21:41.065822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.663 [2024-11-20 11:21:41.065836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.663 qpair failed and we were unable to recover it.
00:27:13.663 [2024-11-20 11:21:41.075733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.663 [2024-11-20 11:21:41.075787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.663 [2024-11-20 11:21:41.075801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.664 [2024-11-20 11:21:41.075807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.664 [2024-11-20 11:21:41.075814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.664 [2024-11-20 11:21:41.075828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.664 qpair failed and we were unable to recover it.
00:27:13.664 [2024-11-20 11:21:41.085784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.664 [2024-11-20 11:21:41.085850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.664 [2024-11-20 11:21:41.085864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.664 [2024-11-20 11:21:41.085870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.664 [2024-11-20 11:21:41.085876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.664 [2024-11-20 11:21:41.085890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.664 qpair failed and we were unable to recover it.
00:27:13.664 [2024-11-20 11:21:41.095856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.664 [2024-11-20 11:21:41.095910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.664 [2024-11-20 11:21:41.095925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.664 [2024-11-20 11:21:41.095932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.664 [2024-11-20 11:21:41.095938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.664 [2024-11-20 11:21:41.095955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.664 qpair failed and we were unable to recover it.
00:27:13.664 [2024-11-20 11:21:41.105812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.664 [2024-11-20 11:21:41.105868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.664 [2024-11-20 11:21:41.105882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.664 [2024-11-20 11:21:41.105892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.664 [2024-11-20 11:21:41.105898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.664 [2024-11-20 11:21:41.105912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.664 qpair failed and we were unable to recover it.
00:27:13.664 [2024-11-20 11:21:41.115845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.664 [2024-11-20 11:21:41.115896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.664 [2024-11-20 11:21:41.115910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.664 [2024-11-20 11:21:41.115916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.664 [2024-11-20 11:21:41.115922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.664 [2024-11-20 11:21:41.115936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.664 qpair failed and we were unable to recover it.
00:27:13.664 [2024-11-20 11:21:41.125865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.664 [2024-11-20 11:21:41.125918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.664 [2024-11-20 11:21:41.125932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.664 [2024-11-20 11:21:41.125938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.664 [2024-11-20 11:21:41.125945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.664 [2024-11-20 11:21:41.125964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.664 qpair failed and we were unable to recover it.
00:27:13.664 [2024-11-20 11:21:41.135843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.664 [2024-11-20 11:21:41.135900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.664 [2024-11-20 11:21:41.135915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.664 [2024-11-20 11:21:41.135922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.664 [2024-11-20 11:21:41.135928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.664 [2024-11-20 11:21:41.135942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.664 qpair failed and we were unable to recover it.
00:27:13.664 [2024-11-20 11:21:41.145936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.664 [2024-11-20 11:21:41.146000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.664 [2024-11-20 11:21:41.146014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.664 [2024-11-20 11:21:41.146021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.664 [2024-11-20 11:21:41.146026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.664 [2024-11-20 11:21:41.146043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.664 qpair failed and we were unable to recover it.
00:27:13.924 [2024-11-20 11:21:41.155964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.924 [2024-11-20 11:21:41.156018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.924 [2024-11-20 11:21:41.156032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.924 [2024-11-20 11:21:41.156039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.924 [2024-11-20 11:21:41.156045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.924 [2024-11-20 11:21:41.156059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.924 qpair failed and we were unable to recover it.
00:27:13.924 [2024-11-20 11:21:41.165973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.924 [2024-11-20 11:21:41.166030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.924 [2024-11-20 11:21:41.166044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.924 [2024-11-20 11:21:41.166050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.924 [2024-11-20 11:21:41.166056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.924 [2024-11-20 11:21:41.166071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.924 qpair failed and we were unable to recover it.
00:27:13.924 [2024-11-20 11:21:41.176006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.924 [2024-11-20 11:21:41.176065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.924 [2024-11-20 11:21:41.176079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.924 [2024-11-20 11:21:41.176085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.924 [2024-11-20 11:21:41.176092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.924 [2024-11-20 11:21:41.176106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.924 qpair failed and we were unable to recover it.
00:27:13.924 [2024-11-20 11:21:41.186041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.924 [2024-11-20 11:21:41.186098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.924 [2024-11-20 11:21:41.186112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.924 [2024-11-20 11:21:41.186119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.924 [2024-11-20 11:21:41.186125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:13.924 [2024-11-20 11:21:41.186140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.924 qpair failed and we were unable to recover it.
00:27:13.924 [2024-11-20 11:21:41.196063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.924 [2024-11-20 11:21:41.196118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.924 [2024-11-20 11:21:41.196133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.196140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.196146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.196160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.206123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.206177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.206192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.206199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.206205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.206219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.216095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.216172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.216185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.216192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.216198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.216212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.226163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.226219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.226233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.226240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.226246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.226261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.236180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.236268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.236282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.236294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.236300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.236315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.246208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.246266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.246280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.246287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.246292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.246307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.256243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.256306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.256320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.256327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.256333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.256347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.266276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.266336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.266349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.266356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.266362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.266375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.276307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.276363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.276377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.276384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.276390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.276407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.286328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.286383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.286396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.286403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.286409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.286423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.296352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.296407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.296420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.925 [2024-11-20 11:21:41.296427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.925 [2024-11-20 11:21:41.296433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.925 [2024-11-20 11:21:41.296447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.925 qpair failed and we were unable to recover it. 
00:27:13.925 [2024-11-20 11:21:41.306397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.925 [2024-11-20 11:21:41.306491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.925 [2024-11-20 11:21:41.306504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.306511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.306516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.306530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.316418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.316474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.926 [2024-11-20 11:21:41.316488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.316494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.316500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.316514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.326377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.326430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.926 [2024-11-20 11:21:41.326444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.326450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.326456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.326471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.336414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.336470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.926 [2024-11-20 11:21:41.336485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.336492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.336498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.336512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.346517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.346577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.926 [2024-11-20 11:21:41.346590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.346597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.346602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.346617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.356544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.356620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.926 [2024-11-20 11:21:41.356634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.356640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.356647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.356660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.366555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.366634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.926 [2024-11-20 11:21:41.366648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.366658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.366664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.366678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.376619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.376670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.926 [2024-11-20 11:21:41.376684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.376690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.376696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.376711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.386666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.386766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.926 [2024-11-20 11:21:41.386781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.386787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.386793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.386807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.396650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.396709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.926 [2024-11-20 11:21:41.396723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.396729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.396735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.396750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.406682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.406746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.926 [2024-11-20 11:21:41.406761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.926 [2024-11-20 11:21:41.406767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.926 [2024-11-20 11:21:41.406773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.926 [2024-11-20 11:21:41.406791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.926 qpair failed and we were unable to recover it. 
00:27:13.926 [2024-11-20 11:21:41.416630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.926 [2024-11-20 11:21:41.416690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.927 [2024-11-20 11:21:41.416703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.927 [2024-11-20 11:21:41.416710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.927 [2024-11-20 11:21:41.416716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:13.927 [2024-11-20 11:21:41.416730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.927 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 11:21:41.426733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.187 [2024-11-20 11:21:41.426794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.187 [2024-11-20 11:21:41.426808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.187 [2024-11-20 11:21:41.426815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.187 [2024-11-20 11:21:41.426821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.187 [2024-11-20 11:21:41.426836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 11:21:41.436781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.187 [2024-11-20 11:21:41.436847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.187 [2024-11-20 11:21:41.436862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.187 [2024-11-20 11:21:41.436869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.187 [2024-11-20 11:21:41.436875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.187 [2024-11-20 11:21:41.436890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 11:21:41.446706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.187 [2024-11-20 11:21:41.446762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.187 [2024-11-20 11:21:41.446777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.187 [2024-11-20 11:21:41.446783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.187 [2024-11-20 11:21:41.446789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.187 [2024-11-20 11:21:41.446804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 11:21:41.456802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.187 [2024-11-20 11:21:41.456859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.187 [2024-11-20 11:21:41.456874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.187 [2024-11-20 11:21:41.456880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.187 [2024-11-20 11:21:41.456886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.187 [2024-11-20 11:21:41.456901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 11:21:41.466849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.187 [2024-11-20 11:21:41.466907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.187 [2024-11-20 11:21:41.466921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.187 [2024-11-20 11:21:41.466928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.187 [2024-11-20 11:21:41.466934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.187 [2024-11-20 11:21:41.466952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 11:21:41.476892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.187 [2024-11-20 11:21:41.476963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.187 [2024-11-20 11:21:41.476977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.187 [2024-11-20 11:21:41.476985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.187 [2024-11-20 11:21:41.476991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.187 [2024-11-20 11:21:41.477006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 11:21:41.486896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.187 [2024-11-20 11:21:41.486951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.187 [2024-11-20 11:21:41.486966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.187 [2024-11-20 11:21:41.486972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.187 [2024-11-20 11:21:41.486978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.187 [2024-11-20 11:21:41.486993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 11:21:41.496926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.187 [2024-11-20 11:21:41.496984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.187 [2024-11-20 11:21:41.496998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.187 [2024-11-20 11:21:41.497008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.187 [2024-11-20 11:21:41.497014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.187 [2024-11-20 11:21:41.497028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 11:21:41.506944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.187 [2024-11-20 11:21:41.507023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.187 [2024-11-20 11:21:41.507037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.187 [2024-11-20 11:21:41.507044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.187 [2024-11-20 11:21:41.507050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.187 [2024-11-20 11:21:41.507065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 11:21:41.517028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.187 [2024-11-20 11:21:41.517082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.187 [2024-11-20 11:21:41.517099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.187 [2024-11-20 11:21:41.517106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.517113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.517128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.527000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.527081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.527095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.527101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.527107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.527123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.537061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.537115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.537129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.537136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.537142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.537161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.547076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.547133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.547148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.547155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.547161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.547176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.557101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.557156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.557170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.557177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.557183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.557198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.567176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.567225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.567239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.567245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.567252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.567266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.577195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.577262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.577277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.577283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.577289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.577304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.587206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.587285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.587299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.587305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.587311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.587325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.597200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.597259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.597273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.597280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.597286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.597300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.607285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.607342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.607356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.607363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.607369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.607383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.617325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.617390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.617404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.617410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.617416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.188 [2024-11-20 11:21:41.617431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 11:21:41.627236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.188 [2024-11-20 11:21:41.627293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.188 [2024-11-20 11:21:41.627306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.188 [2024-11-20 11:21:41.627316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.188 [2024-11-20 11:21:41.627322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.189 [2024-11-20 11:21:41.627337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 11:21:41.637313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.189 [2024-11-20 11:21:41.637375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.189 [2024-11-20 11:21:41.637390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.189 [2024-11-20 11:21:41.637397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.189 [2024-11-20 11:21:41.637403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.189 [2024-11-20 11:21:41.637418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 11:21:41.647392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.189 [2024-11-20 11:21:41.647444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.189 [2024-11-20 11:21:41.647458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.189 [2024-11-20 11:21:41.647465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.189 [2024-11-20 11:21:41.647471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.189 [2024-11-20 11:21:41.647486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 11:21:41.657429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.189 [2024-11-20 11:21:41.657484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.189 [2024-11-20 11:21:41.657498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.189 [2024-11-20 11:21:41.657504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.189 [2024-11-20 11:21:41.657510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.189 [2024-11-20 11:21:41.657524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 11:21:41.667420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.189 [2024-11-20 11:21:41.667477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.189 [2024-11-20 11:21:41.667490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.189 [2024-11-20 11:21:41.667497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.189 [2024-11-20 11:21:41.667503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.189 [2024-11-20 11:21:41.667521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 11:21:41.677437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.189 [2024-11-20 11:21:41.677491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.189 [2024-11-20 11:21:41.677505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.189 [2024-11-20 11:21:41.677511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.189 [2024-11-20 11:21:41.677517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.189 [2024-11-20 11:21:41.677531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.450 [2024-11-20 11:21:41.687408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.450 [2024-11-20 11:21:41.687467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.450 [2024-11-20 11:21:41.687484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.450 [2024-11-20 11:21:41.687491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.450 [2024-11-20 11:21:41.687497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.450 [2024-11-20 11:21:41.687513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.450 qpair failed and we were unable to recover it. 
00:27:14.450 [2024-11-20 11:21:41.697525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.450 [2024-11-20 11:21:41.697579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.450 [2024-11-20 11:21:41.697593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.450 [2024-11-20 11:21:41.697600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.450 [2024-11-20 11:21:41.697606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.450 [2024-11-20 11:21:41.697621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.450 qpair failed and we were unable to recover it. 
00:27:14.450 [2024-11-20 11:21:41.707477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.450 [2024-11-20 11:21:41.707532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.450 [2024-11-20 11:21:41.707546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.450 [2024-11-20 11:21:41.707552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.450 [2024-11-20 11:21:41.707559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.450 [2024-11-20 11:21:41.707573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.450 qpair failed and we were unable to recover it. 
00:27:14.450 [2024-11-20 11:21:41.717619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.450 [2024-11-20 11:21:41.717677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.450 [2024-11-20 11:21:41.717692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.450 [2024-11-20 11:21:41.717699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.450 [2024-11-20 11:21:41.717704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.450 [2024-11-20 11:21:41.717719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.450 qpair failed and we were unable to recover it. 
00:27:14.450 [2024-11-20 11:21:41.727530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.450 [2024-11-20 11:21:41.727614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.450 [2024-11-20 11:21:41.727628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.450 [2024-11-20 11:21:41.727635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.450 [2024-11-20 11:21:41.727640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.450 [2024-11-20 11:21:41.727655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.450 qpair failed and we were unable to recover it. 
00:27:14.450 [2024-11-20 11:21:41.737636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.450 [2024-11-20 11:21:41.737686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.450 [2024-11-20 11:21:41.737701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.450 [2024-11-20 11:21:41.737707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.450 [2024-11-20 11:21:41.737713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.450 [2024-11-20 11:21:41.737728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.450 qpair failed and we were unable to recover it. 
00:27:14.450 [2024-11-20 11:21:41.747691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.450 [2024-11-20 11:21:41.747748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.450 [2024-11-20 11:21:41.747763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.450 [2024-11-20 11:21:41.747770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.450 [2024-11-20 11:21:41.747776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.450 [2024-11-20 11:21:41.747791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.451 qpair failed and we were unable to recover it. 
00:27:14.451 [2024-11-20 11:21:41.757627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.451 [2024-11-20 11:21:41.757683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.451 [2024-11-20 11:21:41.757700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.451 [2024-11-20 11:21:41.757708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.451 [2024-11-20 11:21:41.757713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.451 [2024-11-20 11:21:41.757728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.451 qpair failed and we were unable to recover it. 
00:27:14.451 [2024-11-20 11:21:41.767691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.451 [2024-11-20 11:21:41.767748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.451 [2024-11-20 11:21:41.767762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.451 [2024-11-20 11:21:41.767769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.451 [2024-11-20 11:21:41.767775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.451 [2024-11-20 11:21:41.767789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.451 qpair failed and we were unable to recover it. 
00:27:14.451 [2024-11-20 11:21:41.777676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.451 [2024-11-20 11:21:41.777730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.451 [2024-11-20 11:21:41.777744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.451 [2024-11-20 11:21:41.777751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.451 [2024-11-20 11:21:41.777757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.451 [2024-11-20 11:21:41.777771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.451 qpair failed and we were unable to recover it.
00:27:14.451 [2024-11-20 11:21:41.787794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.451 [2024-11-20 11:21:41.787853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.451 [2024-11-20 11:21:41.787867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.451 [2024-11-20 11:21:41.787874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.451 [2024-11-20 11:21:41.787880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.451 [2024-11-20 11:21:41.787894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.451 qpair failed and we were unable to recover it.
00:27:14.451 [2024-11-20 11:21:41.797745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.451 [2024-11-20 11:21:41.797805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.451 [2024-11-20 11:21:41.797819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.451 [2024-11-20 11:21:41.797825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.451 [2024-11-20 11:21:41.797831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.451 [2024-11-20 11:21:41.797849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.451 qpair failed and we were unable to recover it.
00:27:14.451 [2024-11-20 11:21:41.807833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.451 [2024-11-20 11:21:41.807913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.451 [2024-11-20 11:21:41.807927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.451 [2024-11-20 11:21:41.807934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.451 [2024-11-20 11:21:41.807940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.451 [2024-11-20 11:21:41.807961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.451 qpair failed and we were unable to recover it.
00:27:14.451 [2024-11-20 11:21:41.817849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.451 [2024-11-20 11:21:41.817903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.451 [2024-11-20 11:21:41.817917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.451 [2024-11-20 11:21:41.817924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.451 [2024-11-20 11:21:41.817930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.451 [2024-11-20 11:21:41.817944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.451 qpair failed and we were unable to recover it.
00:27:14.451 [2024-11-20 11:21:41.827934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.451 [2024-11-20 11:21:41.827996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.451 [2024-11-20 11:21:41.828010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.451 [2024-11-20 11:21:41.828017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.451 [2024-11-20 11:21:41.828023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.451 [2024-11-20 11:21:41.828038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.451 qpair failed and we were unable to recover it.
00:27:14.451 [2024-11-20 11:21:41.837916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.451 [2024-11-20 11:21:41.837975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.451 [2024-11-20 11:21:41.837990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.451 [2024-11-20 11:21:41.837997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.451 [2024-11-20 11:21:41.838003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.451 [2024-11-20 11:21:41.838018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.451 qpair failed and we were unable to recover it.
00:27:14.451 [2024-11-20 11:21:41.847958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.451 [2024-11-20 11:21:41.848049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.451 [2024-11-20 11:21:41.848063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.451 [2024-11-20 11:21:41.848070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.451 [2024-11-20 11:21:41.848076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.451 [2024-11-20 11:21:41.848090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.451 qpair failed and we were unable to recover it.
00:27:14.451 [2024-11-20 11:21:41.857925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.451 [2024-11-20 11:21:41.857987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.451 [2024-11-20 11:21:41.858001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.451 [2024-11-20 11:21:41.858008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.451 [2024-11-20 11:21:41.858014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.451 [2024-11-20 11:21:41.858028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.451 qpair failed and we were unable to recover it.
00:27:14.451 [2024-11-20 11:21:41.867977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.452 [2024-11-20 11:21:41.868035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.452 [2024-11-20 11:21:41.868049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.452 [2024-11-20 11:21:41.868056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.452 [2024-11-20 11:21:41.868062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.452 [2024-11-20 11:21:41.868076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.452 qpair failed and we were unable to recover it.
00:27:14.452 [2024-11-20 11:21:41.877992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.452 [2024-11-20 11:21:41.878048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.452 [2024-11-20 11:21:41.878061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.452 [2024-11-20 11:21:41.878067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.452 [2024-11-20 11:21:41.878073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.452 [2024-11-20 11:21:41.878087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.452 qpair failed and we were unable to recover it.
00:27:14.452 [2024-11-20 11:21:41.888096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.452 [2024-11-20 11:21:41.888153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.452 [2024-11-20 11:21:41.888170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.452 [2024-11-20 11:21:41.888177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.452 [2024-11-20 11:21:41.888183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.452 [2024-11-20 11:21:41.888197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.452 qpair failed and we were unable to recover it.
00:27:14.452 [2024-11-20 11:21:41.898034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.452 [2024-11-20 11:21:41.898086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.452 [2024-11-20 11:21:41.898099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.452 [2024-11-20 11:21:41.898105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.452 [2024-11-20 11:21:41.898111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.452 [2024-11-20 11:21:41.898125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.452 qpair failed and we were unable to recover it.
00:27:14.452 [2024-11-20 11:21:41.908151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.452 [2024-11-20 11:21:41.908204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.452 [2024-11-20 11:21:41.908217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.452 [2024-11-20 11:21:41.908224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.452 [2024-11-20 11:21:41.908230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.452 [2024-11-20 11:21:41.908243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.452 qpair failed and we were unable to recover it.
00:27:14.452 [2024-11-20 11:21:41.918157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.452 [2024-11-20 11:21:41.918251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.452 [2024-11-20 11:21:41.918264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.452 [2024-11-20 11:21:41.918271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.452 [2024-11-20 11:21:41.918277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.452 [2024-11-20 11:21:41.918291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.452 qpair failed and we were unable to recover it.
00:27:14.452 [2024-11-20 11:21:41.928125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.452 [2024-11-20 11:21:41.928199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.452 [2024-11-20 11:21:41.928214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.452 [2024-11-20 11:21:41.928220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.452 [2024-11-20 11:21:41.928226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.452 [2024-11-20 11:21:41.928244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.452 qpair failed and we were unable to recover it.
00:27:14.452 [2024-11-20 11:21:41.938220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.452 [2024-11-20 11:21:41.938309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.452 [2024-11-20 11:21:41.938324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.452 [2024-11-20 11:21:41.938331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.452 [2024-11-20 11:21:41.938337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.452 [2024-11-20 11:21:41.938351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.452 qpair failed and we were unable to recover it.
00:27:14.712 [2024-11-20 11:21:41.948241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.712 [2024-11-20 11:21:41.948294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.712 [2024-11-20 11:21:41.948307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.712 [2024-11-20 11:21:41.948314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.712 [2024-11-20 11:21:41.948319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.712 [2024-11-20 11:21:41.948334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.712 qpair failed and we were unable to recover it.
00:27:14.712 [2024-11-20 11:21:41.958304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.712 [2024-11-20 11:21:41.958356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.712 [2024-11-20 11:21:41.958370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.712 [2024-11-20 11:21:41.958376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.712 [2024-11-20 11:21:41.958382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.712 [2024-11-20 11:21:41.958396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.712 qpair failed and we were unable to recover it.
00:27:14.712 [2024-11-20 11:21:41.968305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.712 [2024-11-20 11:21:41.968356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.712 [2024-11-20 11:21:41.968371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.712 [2024-11-20 11:21:41.968377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.712 [2024-11-20 11:21:41.968383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.712 [2024-11-20 11:21:41.968397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.712 qpair failed and we were unable to recover it.
00:27:14.712 [2024-11-20 11:21:41.978260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.712 [2024-11-20 11:21:41.978312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.712 [2024-11-20 11:21:41.978326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.712 [2024-11-20 11:21:41.978332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.712 [2024-11-20 11:21:41.978339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.712 [2024-11-20 11:21:41.978352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.712 qpair failed and we were unable to recover it.
00:27:14.712 [2024-11-20 11:21:41.988370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.712 [2024-11-20 11:21:41.988425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.712 [2024-11-20 11:21:41.988439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.712 [2024-11-20 11:21:41.988446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.712 [2024-11-20 11:21:41.988452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.712 [2024-11-20 11:21:41.988467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.712 qpair failed and we were unable to recover it.
00:27:14.712 [2024-11-20 11:21:41.998365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.712 [2024-11-20 11:21:41.998425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.712 [2024-11-20 11:21:41.998439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.712 [2024-11-20 11:21:41.998446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.712 [2024-11-20 11:21:41.998452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.712 [2024-11-20 11:21:41.998467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.712 qpair failed and we were unable to recover it.
00:27:14.712 [2024-11-20 11:21:42.008408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.008480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.008493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.008500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.008506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.008521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.018354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.018409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.018428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.018435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.018440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.018455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.028419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.028475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.028490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.028497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.028503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.028517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.038456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.038541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.038556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.038563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.038569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.038582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.048458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.048510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.048524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.048530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.048536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.048550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.058478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.058538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.058553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.058559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.058566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.058583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.068594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.068650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.068664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.068670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.068676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.068690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.078638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.078693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.078706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.078713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.078719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.078733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.088629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.088680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.088694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.088700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.088706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.088721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.098659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.098714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.098729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.098736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.098741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.098756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.108691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.108746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.713 [2024-11-20 11:21:42.108761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.713 [2024-11-20 11:21:42.108768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.713 [2024-11-20 11:21:42.108774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.713 [2024-11-20 11:21:42.108788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.713 qpair failed and we were unable to recover it.
00:27:14.713 [2024-11-20 11:21:42.118708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.713 [2024-11-20 11:21:42.118761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.714 [2024-11-20 11:21:42.118777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.714 [2024-11-20 11:21:42.118784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.714 [2024-11-20 11:21:42.118790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:14.714 [2024-11-20 11:21:42.118805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.714 qpair failed and we were unable to recover it.
00:27:14.714 [2024-11-20 11:21:42.128732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.714 [2024-11-20 11:21:42.128790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.714 [2024-11-20 11:21:42.128805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.714 [2024-11-20 11:21:42.128812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.714 [2024-11-20 11:21:42.128818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.714 [2024-11-20 11:21:42.128833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.714 qpair failed and we were unable to recover it. 
00:27:14.714 [2024-11-20 11:21:42.138795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.714 [2024-11-20 11:21:42.138897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.714 [2024-11-20 11:21:42.138912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.714 [2024-11-20 11:21:42.138919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.714 [2024-11-20 11:21:42.138925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.714 [2024-11-20 11:21:42.138940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.714 qpair failed and we were unable to recover it. 
00:27:14.714 [2024-11-20 11:21:42.148802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.714 [2024-11-20 11:21:42.148887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.714 [2024-11-20 11:21:42.148904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.714 [2024-11-20 11:21:42.148911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.714 [2024-11-20 11:21:42.148916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.714 [2024-11-20 11:21:42.148931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.714 qpair failed and we were unable to recover it. 
00:27:14.714 [2024-11-20 11:21:42.158827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.714 [2024-11-20 11:21:42.158887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.714 [2024-11-20 11:21:42.158901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.714 [2024-11-20 11:21:42.158907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.714 [2024-11-20 11:21:42.158913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.714 [2024-11-20 11:21:42.158927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.714 qpair failed and we were unable to recover it. 
00:27:14.714 [2024-11-20 11:21:42.168847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.714 [2024-11-20 11:21:42.168900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.714 [2024-11-20 11:21:42.168914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.714 [2024-11-20 11:21:42.168921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.714 [2024-11-20 11:21:42.168927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.714 [2024-11-20 11:21:42.168941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.714 qpair failed and we were unable to recover it. 
00:27:14.714 [2024-11-20 11:21:42.178905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.714 [2024-11-20 11:21:42.178962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.714 [2024-11-20 11:21:42.178976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.714 [2024-11-20 11:21:42.178983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.714 [2024-11-20 11:21:42.178989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.714 [2024-11-20 11:21:42.179003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.714 qpair failed and we were unable to recover it. 
00:27:14.714 [2024-11-20 11:21:42.188924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.714 [2024-11-20 11:21:42.188993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.714 [2024-11-20 11:21:42.189007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.714 [2024-11-20 11:21:42.189013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.714 [2024-11-20 11:21:42.189019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.714 [2024-11-20 11:21:42.189036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.714 qpair failed and we were unable to recover it. 
00:27:14.714 [2024-11-20 11:21:42.198980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.714 [2024-11-20 11:21:42.199035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.714 [2024-11-20 11:21:42.199049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.714 [2024-11-20 11:21:42.199057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.714 [2024-11-20 11:21:42.199063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.714 [2024-11-20 11:21:42.199077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.714 qpair failed and we were unable to recover it. 
00:27:14.974 [2024-11-20 11:21:42.208999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.974 [2024-11-20 11:21:42.209062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.974 [2024-11-20 11:21:42.209076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.974 [2024-11-20 11:21:42.209083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.974 [2024-11-20 11:21:42.209089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.974 [2024-11-20 11:21:42.209103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.974 qpair failed and we were unable to recover it. 
00:27:14.974 [2024-11-20 11:21:42.219003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.974 [2024-11-20 11:21:42.219055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.974 [2024-11-20 11:21:42.219069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.974 [2024-11-20 11:21:42.219075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.974 [2024-11-20 11:21:42.219081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.974 [2024-11-20 11:21:42.219096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.974 qpair failed and we were unable to recover it. 
00:27:14.974 [2024-11-20 11:21:42.229026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.974 [2024-11-20 11:21:42.229080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.974 [2024-11-20 11:21:42.229094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.974 [2024-11-20 11:21:42.229100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.974 [2024-11-20 11:21:42.229106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.974 [2024-11-20 11:21:42.229120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.974 qpair failed and we were unable to recover it. 
00:27:14.974 [2024-11-20 11:21:42.239051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.974 [2024-11-20 11:21:42.239108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.974 [2024-11-20 11:21:42.239123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.974 [2024-11-20 11:21:42.239130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.974 [2024-11-20 11:21:42.239136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.974 [2024-11-20 11:21:42.239150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.974 qpair failed and we were unable to recover it. 
00:27:14.974 [2024-11-20 11:21:42.249073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.249129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.249143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.249150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.249155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.249170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.259100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.259149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.259163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.259169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.259175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.259189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.269144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.269202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.269216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.269223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.269229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.269243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.279231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.279284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.279301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.279308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.279314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.279328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.289233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.289289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.289303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.289310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.289315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.289329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.299211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.299279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.299292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.299299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.299305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.299319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.309182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.309286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.309300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.309306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.309313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.309326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.319273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.319341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.319354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.319361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.319369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.319384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.329291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.329348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.329362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.329370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.329376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.329391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.339317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.339366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.339380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.339387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.339393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.339407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.349349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.349404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.349418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.975 [2024-11-20 11:21:42.349425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.975 [2024-11-20 11:21:42.349431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.975 [2024-11-20 11:21:42.349446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.975 qpair failed and we were unable to recover it. 
00:27:14.975 [2024-11-20 11:21:42.359380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.975 [2024-11-20 11:21:42.359435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.975 [2024-11-20 11:21:42.359449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.359456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.359462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.359476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:14.976 [2024-11-20 11:21:42.369403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.976 [2024-11-20 11:21:42.369459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.976 [2024-11-20 11:21:42.369473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.369480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.369486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.369501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:14.976 [2024-11-20 11:21:42.379427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.976 [2024-11-20 11:21:42.379484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.976 [2024-11-20 11:21:42.379499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.379506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.379512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.379526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:14.976 [2024-11-20 11:21:42.389526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.976 [2024-11-20 11:21:42.389630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.976 [2024-11-20 11:21:42.389644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.389651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.389657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.389671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:14.976 [2024-11-20 11:21:42.399414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.976 [2024-11-20 11:21:42.399512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.976 [2024-11-20 11:21:42.399525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.399532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.399537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.399552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:14.976 [2024-11-20 11:21:42.409521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.976 [2024-11-20 11:21:42.409587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.976 [2024-11-20 11:21:42.409605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.409611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.409617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.409632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:14.976 [2024-11-20 11:21:42.419535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.976 [2024-11-20 11:21:42.419615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.976 [2024-11-20 11:21:42.419629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.419636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.419641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.419656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:14.976 [2024-11-20 11:21:42.429591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.976 [2024-11-20 11:21:42.429666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.976 [2024-11-20 11:21:42.429681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.429687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.429693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.429708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:14.976 [2024-11-20 11:21:42.439604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.976 [2024-11-20 11:21:42.439658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.976 [2024-11-20 11:21:42.439672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.439679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.439685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.439699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:14.976 [2024-11-20 11:21:42.449634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.976 [2024-11-20 11:21:42.449691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.976 [2024-11-20 11:21:42.449704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.449711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.449720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.449735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:14.976 [2024-11-20 11:21:42.459679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.976 [2024-11-20 11:21:42.459729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.976 [2024-11-20 11:21:42.459743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.976 [2024-11-20 11:21:42.459750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.976 [2024-11-20 11:21:42.459756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:14.976 [2024-11-20 11:21:42.459770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.976 qpair failed and we were unable to recover it. 
00:27:15.236 [2024-11-20 11:21:42.469718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.236 [2024-11-20 11:21:42.469823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.236 [2024-11-20 11:21:42.469836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.236 [2024-11-20 11:21:42.469843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.236 [2024-11-20 11:21:42.469849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.236 [2024-11-20 11:21:42.469864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.236 qpair failed and we were unable to recover it. 
00:27:15.236 [2024-11-20 11:21:42.479763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.236 [2024-11-20 11:21:42.479819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.236 [2024-11-20 11:21:42.479833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.236 [2024-11-20 11:21:42.479840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.236 [2024-11-20 11:21:42.479846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.236 [2024-11-20 11:21:42.479860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.236 qpair failed and we were unable to recover it. 
00:27:15.236 [2024-11-20 11:21:42.489748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.236 [2024-11-20 11:21:42.489806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.236 [2024-11-20 11:21:42.489820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.236 [2024-11-20 11:21:42.489826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.236 [2024-11-20 11:21:42.489832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.489846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.499764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.499815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.499829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.499835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.499841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.499855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.509812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.509868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.509882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.509888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.509894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.509908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.519853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.519910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.519925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.519932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.519938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.519965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.529863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.529934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.529953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.529961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.529967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.529981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.539919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.540023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.540049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.540056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.540063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.540077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.549955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.550009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.550023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.550030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.550036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.550050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.559992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.560094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.560108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.560115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.560121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.560135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.570015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.570072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.570087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.570093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.570100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.570115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.579933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.579993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.580007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.580014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.580031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.580047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.590038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.590113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.590127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.590133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.590139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.590153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.237 qpair failed and we were unable to recover it. 
00:27:15.237 [2024-11-20 11:21:42.600077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.237 [2024-11-20 11:21:42.600131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.237 [2024-11-20 11:21:42.600145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.237 [2024-11-20 11:21:42.600151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.237 [2024-11-20 11:21:42.600158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.237 [2024-11-20 11:21:42.600172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.610113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.610166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.610180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.610186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.610192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.610207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.620200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.620286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.620300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.620307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.620313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.620327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.630168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.630227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.630242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.630249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.630255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.630270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.640238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.640293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.640307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.640313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.640319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.640334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.650223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.650281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.650294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.650301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.650307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.650320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.660241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.660299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.660314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.660321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.660327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.660341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.670342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.670421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.670440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.670447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.670452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.670467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.680311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.680368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.680382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.680389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.680395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.680409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.690333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.690426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.690442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.690449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.690455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.690471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.700403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.700457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.700472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.700479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.700485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.700500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.710324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.238 [2024-11-20 11:21:42.710382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.238 [2024-11-20 11:21:42.710397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.238 [2024-11-20 11:21:42.710403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.238 [2024-11-20 11:21:42.710413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.238 [2024-11-20 11:21:42.710428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.238 qpair failed and we were unable to recover it. 
00:27:15.238 [2024-11-20 11:21:42.720420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.239 [2024-11-20 11:21:42.720475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.239 [2024-11-20 11:21:42.720489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.239 [2024-11-20 11:21:42.720496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.239 [2024-11-20 11:21:42.720502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.239 [2024-11-20 11:21:42.720516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.239 qpair failed and we were unable to recover it. 
00:27:15.498 [2024-11-20 11:21:42.730447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.498 [2024-11-20 11:21:42.730501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.498 [2024-11-20 11:21:42.730516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.498 [2024-11-20 11:21:42.730523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.498 [2024-11-20 11:21:42.730529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.498 [2024-11-20 11:21:42.730543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.498 qpair failed and we were unable to recover it. 
00:27:15.498 [2024-11-20 11:21:42.740511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.498 [2024-11-20 11:21:42.740573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.498 [2024-11-20 11:21:42.740587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.498 [2024-11-20 11:21:42.740594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.498 [2024-11-20 11:21:42.740600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.498 [2024-11-20 11:21:42.740615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.750550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.750652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.750666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.750672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.750678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.499 [2024-11-20 11:21:42.750693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.760534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.760589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.760603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.760609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.760615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.499 [2024-11-20 11:21:42.760630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.770557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.770635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.770650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.770657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.770662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.499 [2024-11-20 11:21:42.770677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.780587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.780641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.780655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.780661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.780667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.499 [2024-11-20 11:21:42.780682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.790615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.790668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.790682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.790688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.790694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.499 [2024-11-20 11:21:42.790709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.800652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.800705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.800723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.800730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.800736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.499 [2024-11-20 11:21:42.800750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.810704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.810786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.810801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.810807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.810813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.499 [2024-11-20 11:21:42.810827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.820748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.820808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.820823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.820829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.820835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.499 [2024-11-20 11:21:42.820850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.830737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.830798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.830812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.830819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.830825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.499 [2024-11-20 11:21:42.830839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.840744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.840797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.840811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.840818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.840827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.499 [2024-11-20 11:21:42.840842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.499 qpair failed and we were unable to recover it. 
00:27:15.499 [2024-11-20 11:21:42.850798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.499 [2024-11-20 11:21:42.850853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.499 [2024-11-20 11:21:42.850867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.499 [2024-11-20 11:21:42.850873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.499 [2024-11-20 11:21:42.850880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.850894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.860832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.500 [2024-11-20 11:21:42.860892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.500 [2024-11-20 11:21:42.860906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.500 [2024-11-20 11:21:42.860913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.500 [2024-11-20 11:21:42.860919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.860933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.870810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.500 [2024-11-20 11:21:42.870868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.500 [2024-11-20 11:21:42.870882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.500 [2024-11-20 11:21:42.870889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.500 [2024-11-20 11:21:42.870895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.870909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.880857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.500 [2024-11-20 11:21:42.880912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.500 [2024-11-20 11:21:42.880927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.500 [2024-11-20 11:21:42.880933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.500 [2024-11-20 11:21:42.880940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.880958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.890934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.500 [2024-11-20 11:21:42.890994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.500 [2024-11-20 11:21:42.891008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.500 [2024-11-20 11:21:42.891015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.500 [2024-11-20 11:21:42.891021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.891035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.900929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.500 [2024-11-20 11:21:42.900986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.500 [2024-11-20 11:21:42.901000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.500 [2024-11-20 11:21:42.901006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.500 [2024-11-20 11:21:42.901012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.901027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.911003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.500 [2024-11-20 11:21:42.911107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.500 [2024-11-20 11:21:42.911121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.500 [2024-11-20 11:21:42.911128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.500 [2024-11-20 11:21:42.911134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.911149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.921011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.500 [2024-11-20 11:21:42.921074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.500 [2024-11-20 11:21:42.921089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.500 [2024-11-20 11:21:42.921096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.500 [2024-11-20 11:21:42.921102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.921117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.931060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.500 [2024-11-20 11:21:42.931122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.500 [2024-11-20 11:21:42.931140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.500 [2024-11-20 11:21:42.931147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.500 [2024-11-20 11:21:42.931153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.931168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.941092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.500 [2024-11-20 11:21:42.941152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.500 [2024-11-20 11:21:42.941166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.500 [2024-11-20 11:21:42.941173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.500 [2024-11-20 11:21:42.941179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.941193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.951079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.500 [2024-11-20 11:21:42.951137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.500 [2024-11-20 11:21:42.951151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.500 [2024-11-20 11:21:42.951158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.500 [2024-11-20 11:21:42.951163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.500 [2024-11-20 11:21:42.951177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.500 qpair failed and we were unable to recover it. 
00:27:15.500 [2024-11-20 11:21:42.961160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.501 [2024-11-20 11:21:42.961223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.501 [2024-11-20 11:21:42.961237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.501 [2024-11-20 11:21:42.961243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.501 [2024-11-20 11:21:42.961249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.501 [2024-11-20 11:21:42.961263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.501 qpair failed and we were unable to recover it. 
00:27:15.501 [2024-11-20 11:21:42.971105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.501 [2024-11-20 11:21:42.971157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.501 [2024-11-20 11:21:42.971170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.501 [2024-11-20 11:21:42.971177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.501 [2024-11-20 11:21:42.971186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.501 [2024-11-20 11:21:42.971200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.501 qpair failed and we were unable to recover it. 
00:27:15.501 [2024-11-20 11:21:42.981167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.501 [2024-11-20 11:21:42.981225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.501 [2024-11-20 11:21:42.981238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.501 [2024-11-20 11:21:42.981245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.501 [2024-11-20 11:21:42.981251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.501 [2024-11-20 11:21:42.981265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.501 qpair failed and we were unable to recover it. 
00:27:15.501 [2024-11-20 11:21:42.991198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.501 [2024-11-20 11:21:42.991256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.501 [2024-11-20 11:21:42.991269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.501 [2024-11-20 11:21:42.991276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.501 [2024-11-20 11:21:42.991282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.501 [2024-11-20 11:21:42.991296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.501 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-11-20 11:21:43.001288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.761 [2024-11-20 11:21:43.001396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.761 [2024-11-20 11:21:43.001410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.761 [2024-11-20 11:21:43.001416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.761 [2024-11-20 11:21:43.001422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.761 [2024-11-20 11:21:43.001436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-11-20 11:21:43.011289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.761 [2024-11-20 11:21:43.011342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.761 [2024-11-20 11:21:43.011355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.761 [2024-11-20 11:21:43.011362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.761 [2024-11-20 11:21:43.011368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:15.761 [2024-11-20 11:21:43.011382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-11-20 11:21:43.021275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.761 [2024-11-20 11:21:43.021332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.761 [2024-11-20 11:21:43.021346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.761 [2024-11-20 11:21:43.021352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.761 [2024-11-20 11:21:43.021358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.761 [2024-11-20 11:21:43.021373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-11-20 11:21:43.031355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.761 [2024-11-20 11:21:43.031411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.761 [2024-11-20 11:21:43.031425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.761 [2024-11-20 11:21:43.031431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.761 [2024-11-20 11:21:43.031438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.761 [2024-11-20 11:21:43.031453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-11-20 11:21:43.041323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.761 [2024-11-20 11:21:43.041378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.761 [2024-11-20 11:21:43.041393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.761 [2024-11-20 11:21:43.041399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.761 [2024-11-20 11:21:43.041406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.761 [2024-11-20 11:21:43.041420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-11-20 11:21:43.051364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.761 [2024-11-20 11:21:43.051413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.761 [2024-11-20 11:21:43.051427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.761 [2024-11-20 11:21:43.051433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.761 [2024-11-20 11:21:43.051440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.761 [2024-11-20 11:21:43.051454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-11-20 11:21:43.061396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.761 [2024-11-20 11:21:43.061455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.761 [2024-11-20 11:21:43.061473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.761 [2024-11-20 11:21:43.061480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.761 [2024-11-20 11:21:43.061486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.761 [2024-11-20 11:21:43.061501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-11-20 11:21:43.071433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.761 [2024-11-20 11:21:43.071486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.761 [2024-11-20 11:21:43.071501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.761 [2024-11-20 11:21:43.071509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.761 [2024-11-20 11:21:43.071514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.761 [2024-11-20 11:21:43.071529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-11-20 11:21:43.081443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.761 [2024-11-20 11:21:43.081495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.761 [2024-11-20 11:21:43.081508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.761 [2024-11-20 11:21:43.081515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.761 [2024-11-20 11:21:43.081521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.761 [2024-11-20 11:21:43.081535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-11-20 11:21:43.091409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.761 [2024-11-20 11:21:43.091461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.761 [2024-11-20 11:21:43.091475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.761 [2024-11-20 11:21:43.091481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.761 [2024-11-20 11:21:43.091487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.761 [2024-11-20 11:21:43.091502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-11-20 11:21:43.101481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.101568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.101582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.101589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.101598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.101612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.111541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.111598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.111612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.111618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.111624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.111639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.121564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.121622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.121636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.121643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.121649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.121664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.131597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.131668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.131683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.131689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.131695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.131710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.141604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.141654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.141668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.141675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.141681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.141696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.151673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.151729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.151742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.151749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.151754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.151769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.161689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.161745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.161759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.161765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.161771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.161786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.171723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.171796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.171810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.171816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.171822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.171836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.181659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.181712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.181726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.181733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.181738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.181753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.191754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.191839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.191857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.191864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.191870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.191885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.201838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.201896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-11-20 11:21:43.201911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-11-20 11:21:43.201918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-11-20 11:21:43.201924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.762 [2024-11-20 11:21:43.201938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-11-20 11:21:43.211846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-11-20 11:21:43.211914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.763 [2024-11-20 11:21:43.211929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.763 [2024-11-20 11:21:43.211936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.763 [2024-11-20 11:21:43.211942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.763 [2024-11-20 11:21:43.211961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.763 qpair failed and we were unable to recover it.
00:27:15.763 [2024-11-20 11:21:43.221847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.763 [2024-11-20 11:21:43.221915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.763 [2024-11-20 11:21:43.221931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.763 [2024-11-20 11:21:43.221937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.763 [2024-11-20 11:21:43.221943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.763 [2024-11-20 11:21:43.221963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.763 qpair failed and we were unable to recover it.
00:27:15.763 [2024-11-20 11:21:43.231883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.763 [2024-11-20 11:21:43.231937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.763 [2024-11-20 11:21:43.231958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.763 [2024-11-20 11:21:43.231965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.763 [2024-11-20 11:21:43.231975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.763 [2024-11-20 11:21:43.231990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.763 qpair failed and we were unable to recover it.
00:27:15.763 [2024-11-20 11:21:43.241910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.763 [2024-11-20 11:21:43.241970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.763 [2024-11-20 11:21:43.241985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.763 [2024-11-20 11:21:43.241992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.763 [2024-11-20 11:21:43.241998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.763 [2024-11-20 11:21:43.242012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.763 qpair failed and we were unable to recover it.
00:27:15.763 [2024-11-20 11:21:43.251871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.763 [2024-11-20 11:21:43.251926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.763 [2024-11-20 11:21:43.251941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.763 [2024-11-20 11:21:43.251952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.763 [2024-11-20 11:21:43.251958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:15.763 [2024-11-20 11:21:43.251973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.763 qpair failed and we were unable to recover it.
00:27:16.023 [2024-11-20 11:21:43.261971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.023 [2024-11-20 11:21:43.262029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.023 [2024-11-20 11:21:43.262043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.023 [2024-11-20 11:21:43.262050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.023 [2024-11-20 11:21:43.262056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.023 [2024-11-20 11:21:43.262070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.023 qpair failed and we were unable to recover it.
00:27:16.023 [2024-11-20 11:21:43.272028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.023 [2024-11-20 11:21:43.272135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.023 [2024-11-20 11:21:43.272148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.023 [2024-11-20 11:21:43.272155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.023 [2024-11-20 11:21:43.272161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.023 [2024-11-20 11:21:43.272175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.023 qpair failed and we were unable to recover it.
00:27:16.023 [2024-11-20 11:21:43.282056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.023 [2024-11-20 11:21:43.282114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.023 [2024-11-20 11:21:43.282128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.023 [2024-11-20 11:21:43.282135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.023 [2024-11-20 11:21:43.282141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.023 [2024-11-20 11:21:43.282156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.023 qpair failed and we were unable to recover it.
00:27:16.023 [2024-11-20 11:21:43.292102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.023 [2024-11-20 11:21:43.292155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.023 [2024-11-20 11:21:43.292169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.023 [2024-11-20 11:21:43.292176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.023 [2024-11-20 11:21:43.292182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.023 [2024-11-20 11:21:43.292198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.023 qpair failed and we were unable to recover it.
00:27:16.023 [2024-11-20 11:21:43.302073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.023 [2024-11-20 11:21:43.302154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.023 [2024-11-20 11:21:43.302168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.023 [2024-11-20 11:21:43.302175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.023 [2024-11-20 11:21:43.302181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.023 [2024-11-20 11:21:43.302194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.023 qpair failed and we were unable to recover it.
00:27:16.023 [2024-11-20 11:21:43.312173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.023 [2024-11-20 11:21:43.312251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.023 [2024-11-20 11:21:43.312265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.023 [2024-11-20 11:21:43.312272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.023 [2024-11-20 11:21:43.312278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.023 [2024-11-20 11:21:43.312292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.023 qpair failed and we were unable to recover it.
00:27:16.023 [2024-11-20 11:21:43.322114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.023 [2024-11-20 11:21:43.322206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.023 [2024-11-20 11:21:43.322226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.023 [2024-11-20 11:21:43.322232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.023 [2024-11-20 11:21:43.322238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.023 [2024-11-20 11:21:43.322253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.023 qpair failed and we were unable to recover it.
00:27:16.023 [2024-11-20 11:21:43.332111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.023 [2024-11-20 11:21:43.332161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.023 [2024-11-20 11:21:43.332176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.023 [2024-11-20 11:21:43.332183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.023 [2024-11-20 11:21:43.332189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.023 [2024-11-20 11:21:43.332203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.023 qpair failed and we were unable to recover it.
00:27:16.023 [2024-11-20 11:21:43.342133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.023 [2024-11-20 11:21:43.342188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.023 [2024-11-20 11:21:43.342203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.023 [2024-11-20 11:21:43.342209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.024 [2024-11-20 11:21:43.342215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.024 [2024-11-20 11:21:43.342229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.024 qpair failed and we were unable to recover it.
00:27:16.024 [2024-11-20 11:21:43.352238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.024 [2024-11-20 11:21:43.352296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.024 [2024-11-20 11:21:43.352309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.024 [2024-11-20 11:21:43.352316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.024 [2024-11-20 11:21:43.352322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.024 [2024-11-20 11:21:43.352336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.024 qpair failed and we were unable to recover it.
00:27:16.024 [2024-11-20 11:21:43.362196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.024 [2024-11-20 11:21:43.362248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.024 [2024-11-20 11:21:43.362262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.024 [2024-11-20 11:21:43.362268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.024 [2024-11-20 11:21:43.362277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0
00:27:16.024 [2024-11-20 11:21:43.362291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.024 qpair failed and we were unable to recover it.
00:27:16.024 [2024-11-20 11:21:43.372273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.024 [2024-11-20 11:21:43.372327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.024 [2024-11-20 11:21:43.372341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.024 [2024-11-20 11:21:43.372347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.024 [2024-11-20 11:21:43.372353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.024 [2024-11-20 11:21:43.372367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-11-20 11:21:43.382272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.024 [2024-11-20 11:21:43.382326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.024 [2024-11-20 11:21:43.382340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.024 [2024-11-20 11:21:43.382346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.024 [2024-11-20 11:21:43.382352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.024 [2024-11-20 11:21:43.382366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-11-20 11:21:43.392325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.024 [2024-11-20 11:21:43.392384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.024 [2024-11-20 11:21:43.392398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.024 [2024-11-20 11:21:43.392404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.024 [2024-11-20 11:21:43.392410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.024 [2024-11-20 11:21:43.392425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-11-20 11:21:43.402336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.024 [2024-11-20 11:21:43.402392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.024 [2024-11-20 11:21:43.402405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.024 [2024-11-20 11:21:43.402412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.024 [2024-11-20 11:21:43.402418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.024 [2024-11-20 11:21:43.402432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-11-20 11:21:43.412373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.024 [2024-11-20 11:21:43.412427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.024 [2024-11-20 11:21:43.412442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.024 [2024-11-20 11:21:43.412448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.024 [2024-11-20 11:21:43.412454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.024 [2024-11-20 11:21:43.412468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-11-20 11:21:43.422441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.024 [2024-11-20 11:21:43.422498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.024 [2024-11-20 11:21:43.422512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.024 [2024-11-20 11:21:43.422519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.024 [2024-11-20 11:21:43.422525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.024 [2024-11-20 11:21:43.422539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-11-20 11:21:43.432444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.024 [2024-11-20 11:21:43.432500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.024 [2024-11-20 11:21:43.432514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.024 [2024-11-20 11:21:43.432521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.024 [2024-11-20 11:21:43.432527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.024 [2024-11-20 11:21:43.432542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-11-20 11:21:43.442456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.024 [2024-11-20 11:21:43.442511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.024 [2024-11-20 11:21:43.442525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.024 [2024-11-20 11:21:43.442531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.024 [2024-11-20 11:21:43.442537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.024 [2024-11-20 11:21:43.442551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-11-20 11:21:43.452472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.024 [2024-11-20 11:21:43.452524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.024 [2024-11-20 11:21:43.452540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.024 [2024-11-20 11:21:43.452547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.024 [2024-11-20 11:21:43.452553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.024 [2024-11-20 11:21:43.452567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-11-20 11:21:43.462499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-11-20 11:21:43.462552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-11-20 11:21:43.462565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-11-20 11:21:43.462571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-11-20 11:21:43.462577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.025 [2024-11-20 11:21:43.462592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-11-20 11:21:43.472479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-11-20 11:21:43.472537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-11-20 11:21:43.472552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-11-20 11:21:43.472559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-11-20 11:21:43.472565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.025 [2024-11-20 11:21:43.472579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-11-20 11:21:43.482518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-11-20 11:21:43.482578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-11-20 11:21:43.482591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-11-20 11:21:43.482598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-11-20 11:21:43.482604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.025 [2024-11-20 11:21:43.482618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-11-20 11:21:43.492595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-11-20 11:21:43.492653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-11-20 11:21:43.492667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-11-20 11:21:43.492673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-11-20 11:21:43.492682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.025 [2024-11-20 11:21:43.492696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-11-20 11:21:43.502696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-11-20 11:21:43.502779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-11-20 11:21:43.502793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-11-20 11:21:43.502799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-11-20 11:21:43.502805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.025 [2024-11-20 11:21:43.502820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-11-20 11:21:43.512668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-11-20 11:21:43.512723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-11-20 11:21:43.512736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-11-20 11:21:43.512743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-11-20 11:21:43.512749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.025 [2024-11-20 11:21:43.512763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.285 [2024-11-20 11:21:43.522674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.285 [2024-11-20 11:21:43.522731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.285 [2024-11-20 11:21:43.522747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.285 [2024-11-20 11:21:43.522754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.285 [2024-11-20 11:21:43.522760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.285 [2024-11-20 11:21:43.522774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.285 qpair failed and we were unable to recover it. 
00:27:16.285 [2024-11-20 11:21:43.532722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.285 [2024-11-20 11:21:43.532778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.285 [2024-11-20 11:21:43.532792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.285 [2024-11-20 11:21:43.532798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.285 [2024-11-20 11:21:43.532804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.285 [2024-11-20 11:21:43.532819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.285 qpair failed and we were unable to recover it. 
00:27:16.285 [2024-11-20 11:21:43.542728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.285 [2024-11-20 11:21:43.542784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.285 [2024-11-20 11:21:43.542798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.285 [2024-11-20 11:21:43.542805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.285 [2024-11-20 11:21:43.542811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.285 [2024-11-20 11:21:43.542825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.285 qpair failed and we were unable to recover it. 
00:27:16.285 [2024-11-20 11:21:43.552774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.285 [2024-11-20 11:21:43.552828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.285 [2024-11-20 11:21:43.552842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.285 [2024-11-20 11:21:43.552849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.285 [2024-11-20 11:21:43.552855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.285 [2024-11-20 11:21:43.552869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.285 qpair failed and we were unable to recover it. 
00:27:16.285 [2024-11-20 11:21:43.562793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.285 [2024-11-20 11:21:43.562845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.285 [2024-11-20 11:21:43.562860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.285 [2024-11-20 11:21:43.562867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.285 [2024-11-20 11:21:43.562873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.285 [2024-11-20 11:21:43.562888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.285 qpair failed and we were unable to recover it. 
00:27:16.285 [2024-11-20 11:21:43.572826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.285 [2024-11-20 11:21:43.572901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.285 [2024-11-20 11:21:43.572917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.285 [2024-11-20 11:21:43.572924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.285 [2024-11-20 11:21:43.572930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.285 [2024-11-20 11:21:43.572945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.285 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.582850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.582906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.582923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.582930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.582936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.582955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.592930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.593032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.593046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.593052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.593058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.593073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.602927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.603004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.603019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.603027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.603034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.603051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.612967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.613023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.613038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.613045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.613051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.613066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.622977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.623030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.623044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.623051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.623060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.623075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.633006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.633079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.633095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.633102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.633108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.633123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.643023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.643082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.643096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.643103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.643109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.643124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.653045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.653094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.653108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.653115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.653121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.653136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.663097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.663163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.663177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.663184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.663190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.663204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.673088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.673143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.673157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.673164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.673170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.673184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.683088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.683183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.286 [2024-11-20 11:21:43.683197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.286 [2024-11-20 11:21:43.683204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.286 [2024-11-20 11:21:43.683209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.286 [2024-11-20 11:21:43.683224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.286 qpair failed and we were unable to recover it. 
00:27:16.286 [2024-11-20 11:21:43.693173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.286 [2024-11-20 11:21:43.693228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.287 [2024-11-20 11:21:43.693245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.287 [2024-11-20 11:21:43.693252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.287 [2024-11-20 11:21:43.693258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.287 [2024-11-20 11:21:43.693274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.287 qpair failed and we were unable to recover it. 
00:27:16.287 [2024-11-20 11:21:43.703198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.287 [2024-11-20 11:21:43.703259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.287 [2024-11-20 11:21:43.703274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.287 [2024-11-20 11:21:43.703280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.287 [2024-11-20 11:21:43.703286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.287 [2024-11-20 11:21:43.703300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.287 qpair failed and we were unable to recover it. 
00:27:16.287 [2024-11-20 11:21:43.713230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.287 [2024-11-20 11:21:43.713290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.287 [2024-11-20 11:21:43.713307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.287 [2024-11-20 11:21:43.713314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.287 [2024-11-20 11:21:43.713319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.287 [2024-11-20 11:21:43.713334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.287 qpair failed and we were unable to recover it. 
00:27:16.287 [2024-11-20 11:21:43.723298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.287 [2024-11-20 11:21:43.723356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.287 [2024-11-20 11:21:43.723370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.287 [2024-11-20 11:21:43.723377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.287 [2024-11-20 11:21:43.723382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.287 [2024-11-20 11:21:43.723397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.287 qpair failed and we were unable to recover it. 
00:27:16.287 [2024-11-20 11:21:43.733293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.287 [2024-11-20 11:21:43.733344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.287 [2024-11-20 11:21:43.733359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.287 [2024-11-20 11:21:43.733366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.287 [2024-11-20 11:21:43.733372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.287 [2024-11-20 11:21:43.733387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.287 qpair failed and we were unable to recover it. 
00:27:16.287 [2024-11-20 11:21:43.743313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.287 [2024-11-20 11:21:43.743364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.287 [2024-11-20 11:21:43.743378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.287 [2024-11-20 11:21:43.743385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.287 [2024-11-20 11:21:43.743391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.287 [2024-11-20 11:21:43.743405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.287 qpair failed and we were unable to recover it. 
00:27:16.287 [2024-11-20 11:21:43.753284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.287 [2024-11-20 11:21:43.753340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.287 [2024-11-20 11:21:43.753354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.287 [2024-11-20 11:21:43.753360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.287 [2024-11-20 11:21:43.753369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.287 [2024-11-20 11:21:43.753383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.287 qpair failed and we were unable to recover it. 
00:27:16.287 [2024-11-20 11:21:43.763307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.287 [2024-11-20 11:21:43.763363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.287 [2024-11-20 11:21:43.763377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.287 [2024-11-20 11:21:43.763384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.287 [2024-11-20 11:21:43.763390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.287 [2024-11-20 11:21:43.763404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.287 qpair failed and we were unable to recover it. 
00:27:16.287 [2024-11-20 11:21:43.773393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.287 [2024-11-20 11:21:43.773449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.287 [2024-11-20 11:21:43.773464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.287 [2024-11-20 11:21:43.773471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.287 [2024-11-20 11:21:43.773476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.287 [2024-11-20 11:21:43.773491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.287 qpair failed and we were unable to recover it. 
00:27:16.547 [2024-11-20 11:21:43.783421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.547 [2024-11-20 11:21:43.783504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.547 [2024-11-20 11:21:43.783518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.547 [2024-11-20 11:21:43.783524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.547 [2024-11-20 11:21:43.783530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16e5ba0 00:27:16.547 [2024-11-20 11:21:43.783544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.547 qpair failed and we were unable to recover it. 
00:27:16.547 [2024-11-20 11:21:43.793467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.547 [2024-11-20 11:21:43.793563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.547 [2024-11-20 11:21:43.793620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.547 [2024-11-20 11:21:43.793646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.547 [2024-11-20 11:21:43.793667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f684c000b90 00:27:16.547 [2024-11-20 11:21:43.793720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.547 qpair failed and we were unable to recover it. 
00:27:16.547 [2024-11-20 11:21:43.803548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.547 [2024-11-20 11:21:43.803617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.547 [2024-11-20 11:21:43.803646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.547 [2024-11-20 11:21:43.803660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.547 [2024-11-20 11:21:43.803672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f684c000b90 00:27:16.547 [2024-11-20 11:21:43.803703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:16.547 qpair failed and we were unable to recover it. 00:27:16.547 [2024-11-20 11:21:43.803862] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:16.547 A controller has encountered a failure and is being reset. 00:27:16.547 Controller properly reset. 00:27:16.547 Initializing NVMe Controllers 00:27:16.547 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:16.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:16.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:16.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:16.547 Initialization complete. Launching workers. 
00:27:16.547 Starting thread on core 1 00:27:16.547 Starting thread on core 2 00:27:16.547 Starting thread on core 3 00:27:16.547 Starting thread on core 0 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:16.547 00:27:16.547 real 0m10.794s 00:27:16.547 user 0m19.255s 00:27:16.547 sys 0m4.817s 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.547 ************************************ 00:27:16.547 END TEST nvmf_target_disconnect_tc2 00:27:16.547 ************************************ 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:16.547 rmmod nvme_tcp 00:27:16.547 rmmod nvme_fabrics 00:27:16.547 rmmod nvme_keyring 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 24354 ']' 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 24354 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 24354 ']' 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 24354 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 24354 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 24354' 00:27:16.547 killing process with pid 24354 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 24354 00:27:16.547 11:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 24354 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.806 11:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.345 11:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:19.345 00:27:19.345 real 0m19.617s 00:27:19.345 user 0m46.868s 00:27:19.345 sys 0m9.748s 00:27:19.345 11:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.345 11:21:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:19.345 ************************************ 00:27:19.345 END TEST nvmf_target_disconnect 00:27:19.345 ************************************ 00:27:19.345 11:21:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:19.345 00:27:19.345 real 5m50.018s 00:27:19.345 user 10m28.365s 00:27:19.345 sys 1m58.869s 00:27:19.345 11:21:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.345 11:21:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.345 ************************************ 00:27:19.345 END TEST nvmf_host 00:27:19.345 ************************************ 00:27:19.345 11:21:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:19.345 11:21:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:19.345 11:21:46 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:19.345 11:21:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:19.345 11:21:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.345 11:21:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:19.345 ************************************ 00:27:19.345 START TEST nvmf_target_core_interrupt_mode 00:27:19.345 ************************************ 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:19.345 * Looking for test storage... 
00:27:19.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.345 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:19.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:19.346 --rc genhtml_branch_coverage=1 00:27:19.346 --rc genhtml_function_coverage=1 00:27:19.346 --rc genhtml_legend=1 00:27:19.346 --rc geninfo_all_blocks=1 00:27:19.346 --rc geninfo_unexecuted_blocks=1 00:27:19.346 00:27:19.346 ' 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:19.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.346 --rc genhtml_branch_coverage=1 00:27:19.346 --rc genhtml_function_coverage=1 00:27:19.346 --rc genhtml_legend=1 00:27:19.346 --rc geninfo_all_blocks=1 00:27:19.346 --rc geninfo_unexecuted_blocks=1 00:27:19.346 00:27:19.346 ' 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:19.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.346 --rc genhtml_branch_coverage=1 00:27:19.346 --rc genhtml_function_coverage=1 00:27:19.346 --rc genhtml_legend=1 00:27:19.346 --rc geninfo_all_blocks=1 00:27:19.346 --rc geninfo_unexecuted_blocks=1 00:27:19.346 00:27:19.346 ' 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:19.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.346 --rc genhtml_branch_coverage=1 00:27:19.346 --rc genhtml_function_coverage=1 00:27:19.346 --rc genhtml_legend=1 00:27:19.346 --rc geninfo_all_blocks=1 00:27:19.346 --rc geninfo_unexecuted_blocks=1 00:27:19.346 00:27:19.346 ' 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.346 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:19.347 ************************************ 00:27:19.347 START TEST nvmf_abort 00:27:19.347 ************************************ 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:19.347 * Looking for test storage... 
00:27:19.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:19.347 11:21:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:19.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.347 --rc genhtml_branch_coverage=1 00:27:19.347 --rc genhtml_function_coverage=1 00:27:19.347 --rc genhtml_legend=1 00:27:19.347 --rc geninfo_all_blocks=1 00:27:19.347 --rc geninfo_unexecuted_blocks=1 00:27:19.347 00:27:19.347 ' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:19.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.347 --rc genhtml_branch_coverage=1 00:27:19.347 --rc genhtml_function_coverage=1 00:27:19.347 --rc genhtml_legend=1 00:27:19.347 --rc geninfo_all_blocks=1 00:27:19.347 --rc geninfo_unexecuted_blocks=1 00:27:19.347 00:27:19.347 ' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:19.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.347 --rc genhtml_branch_coverage=1 00:27:19.347 --rc genhtml_function_coverage=1 00:27:19.347 --rc genhtml_legend=1 00:27:19.347 --rc geninfo_all_blocks=1 00:27:19.347 --rc geninfo_unexecuted_blocks=1 00:27:19.347 00:27:19.347 ' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:19.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.347 --rc genhtml_branch_coverage=1 00:27:19.347 --rc genhtml_function_coverage=1 00:27:19.347 --rc genhtml_legend=1 00:27:19.347 --rc geninfo_all_blocks=1 00:27:19.347 --rc geninfo_unexecuted_blocks=1 00:27:19.347 00:27:19.347 ' 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
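The trace above (scripts/common.sh@333–368) shows the version gate that decides which lcov coverage flags to enable: each version string is split on `.`, `-` or `:` into an array and the fields are compared numerically left to right. A minimal standalone sketch of that idiom — this is a simplified reconstruction for illustration, not the actual `scripts/common.sh` code:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions/lt idiom from the trace: split versions on
# '.', '-' or ':' and compare numeric fields left to right; missing
# trailing fields count as 0.
lt() {  # usage: lt VER1 VER2  -> returns 0 (true) if VER1 < VER2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1  # versions are equal, so not less-than
}

# Mirrors the trace: lcov 1.15 is older than 2, so the extra
# branch/function coverage flags get enabled.
if lt 1.15 2; then
    echo "old lcov: enabling --rc lcov_branch_coverage=1"
fi
```

This matches the decision visible in the trace, where `lt 1.15 2` succeeds and `lcov_rc_opt` is set to `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1`.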
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.347 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.348 11:21:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.348 11:21:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:19.348 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.919 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.920 11:21:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:25.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:25.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.920 
11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:25.920 Found net devices under 0000:86:00.0: cvl_0_0 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:25.920 Found net devices under 0000:86:00.1: cvl_0_1 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.920 11:21:52 
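The device-discovery section of the trace (nvmf/common.sh@313–356) buckets NICs into e810/x722/mlx families by PCI vendor:device ID before choosing which ports to use; here it matches the two Intel `0x8086:0x159b` ports bound to the `ice` driver. A toy sketch of that classification step — the ID table below is a hand-copied subset of the IDs visible in the trace, and the real script resolves devices through a `pci_bus_cache` helper that is not reproduced here:

```shell
#!/usr/bin/env bash
# Hypothetical classifier mapping PCI vendor:device IDs to NIC families,
# using a subset of the IDs seen in the trace (not the real pci_bus_cache).
declare -A nic_family=(
    [0x8086:0x1592]=e810 [0x8086:0x159b]=e810   # Intel E810 variants
    [0x8086:0x37d2]=x722                        # Intel X722
    [0x15b3:0x1017]=mlx  [0x15b3:0x1019]=mlx    # Mellanox (subset)
)
classify() { echo "${nic_family[$1]:-unknown}"; }

classify 0x8086:0x159b   # the family of the two ports found in the trace
```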
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:27:25.920 00:27:25.920 --- 10.0.0.2 ping statistics --- 00:27:25.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.920 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:27:25.920 00:27:25.920 --- 10.0.0.1 ping statistics --- 00:27:25.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.920 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:25.920 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=29017 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 29017 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 29017 ']' 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.921 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.921 [2024-11-20 11:21:52.803884] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:25.921 [2024-11-20 11:21:52.804806] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:27:25.921 [2024-11-20 11:21:52.804840] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.921 [2024-11-20 11:21:52.884915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.921 [2024-11-20 11:21:52.926501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.921 [2024-11-20 11:21:52.926537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.921 [2024-11-20 11:21:52.926544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.921 [2024-11-20 11:21:52.926550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.921 [2024-11-20 11:21:52.926556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.921 [2024-11-20 11:21:52.928024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.921 [2024-11-20 11:21:52.928134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.921 [2024-11-20 11:21:52.928135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.921 [2024-11-20 11:21:52.994337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:25.921 [2024-11-20 11:21:52.995112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:25.921 [2024-11-20 11:21:52.995572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:25.921 [2024-11-20 11:21:52.995677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.921 [2024-11-20 11:21:53.056925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:25.921 Malloc0 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.921 Delay0 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.921 [2024-11-20 11:21:53.144829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.921 11:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:25.921 [2024-11-20 11:21:53.270705] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:28.454 Initializing NVMe Controllers 00:27:28.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:28.454 controller IO queue size 128 less than required 00:27:28.454 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:28.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:28.454 Initialization complete. Launching workers. 
00:27:28.454 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37087 00:27:28.454 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37148, failed to submit 66 00:27:28.454 success 37087, unsuccessful 61, failed 0 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:28.454 rmmod nvme_tcp 00:27:28.454 rmmod nvme_fabrics 00:27:28.454 rmmod nvme_keyring 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:28.454 11:21:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 29017 ']' 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 29017 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 29017 ']' 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 29017 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 29017 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 29017' 00:27:28.454 killing process with pid 29017 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 29017 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 29017 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.454 11:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.362 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:30.362 00:27:30.362 real 0m11.159s 00:27:30.362 user 0m10.426s 00:27:30.362 sys 0m5.756s 00:27:30.362 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.362 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.362 ************************************ 00:27:30.362 END TEST nvmf_abort 00:27:30.362 ************************************ 00:27:30.362 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:30.362 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:30.362 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.362 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:30.362 ************************************ 00:27:30.362 START TEST nvmf_ns_hotplug_stress 00:27:30.362 ************************************ 00:27:30.362 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:30.622 * Looking for test storage... 00:27:30.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:30.622 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:30.622 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:30.622 11:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:30.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.622 --rc genhtml_branch_coverage=1 00:27:30.622 --rc genhtml_function_coverage=1 00:27:30.622 --rc genhtml_legend=1 00:27:30.622 --rc geninfo_all_blocks=1 00:27:30.622 --rc geninfo_unexecuted_blocks=1 00:27:30.622 00:27:30.622 ' 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:30.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.622 --rc genhtml_branch_coverage=1 00:27:30.622 --rc genhtml_function_coverage=1 00:27:30.622 --rc genhtml_legend=1 00:27:30.622 --rc geninfo_all_blocks=1 00:27:30.622 --rc geninfo_unexecuted_blocks=1 00:27:30.622 00:27:30.622 ' 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:30.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.622 --rc genhtml_branch_coverage=1 00:27:30.622 --rc genhtml_function_coverage=1 00:27:30.622 --rc genhtml_legend=1 00:27:30.622 --rc geninfo_all_blocks=1 00:27:30.622 --rc geninfo_unexecuted_blocks=1 00:27:30.622 00:27:30.622 ' 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:30.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.622 --rc genhtml_branch_coverage=1 00:27:30.622 --rc genhtml_function_coverage=1 00:27:30.622 --rc genhtml_legend=1 00:27:30.622 --rc geninfo_all_blocks=1 00:27:30.622 --rc geninfo_unexecuted_blocks=1 00:27:30.622 00:27:30.622 ' 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.622 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.623 11:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:30.623 11:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.623 11:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:37.226 11:22:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.226 
11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:37.226 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.226 11:22:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:37.226 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.226 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.227 11:22:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:37.227 Found net devices under 0000:86:00.0: cvl_0_0 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:37.227 Found net devices under 0000:86:00.1: cvl_0_1 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:37.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:27:37.227 00:27:37.227 --- 10.0.0.2 ping statistics --- 00:27:37.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.227 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:37.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:27:37.227 00:27:37.227 --- 10.0.0.1 ping statistics --- 00:27:37.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.227 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.227 11:22:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=33024 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 33024 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 33024 ']' 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.227 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:37.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.228 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.228 11:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:37.228 [2024-11-20 11:22:03.984248] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:37.228 [2024-11-20 11:22:03.985184] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:27:37.228 [2024-11-20 11:22:03.985218] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.228 [2024-11-20 11:22:04.066267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:37.228 [2024-11-20 11:22:04.107931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.228 [2024-11-20 11:22:04.107972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.228 [2024-11-20 11:22:04.107983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.228 [2024-11-20 11:22:04.107989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.228 [2024-11-20 11:22:04.107994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:37.228 [2024-11-20 11:22:04.109457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.228 [2024-11-20 11:22:04.109564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.228 [2024-11-20 11:22:04.109566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.228 [2024-11-20 11:22:04.177980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:37.228 [2024-11-20 11:22:04.178864] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:37.228 [2024-11-20 11:22:04.179045] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:37.228 [2024-11-20 11:22:04.179188] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:37.228 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.228 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:37.228 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:37.228 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:37.228 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:37.228 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.228 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:37.228 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:37.228 [2024-11-20 11:22:04.414330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.228 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:37.228 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.486 [2024-11-20 11:22:04.806728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.486 11:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:37.745 11:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:37.745 Malloc0 00:27:38.004 11:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:38.004 Delay0 00:27:38.004 11:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.262 11:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:38.521 NULL1 00:27:38.521 11:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:38.781 11:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=33290 00:27:38.781 11:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:38.781 11:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.781 11:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:39.716 Read completed with error (sct=0, sc=11) 00:27:39.716 11:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.975 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.975 11:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:39.975 11:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:40.233 true 00:27:40.233 11:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:40.233 11:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.167 11:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.167 11:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:41.167 11:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:41.425 true 00:27:41.425 11:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:41.425 11:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:41.728 11:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.059 11:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:42.060 11:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:42.060 true 00:27:42.060 11:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:42.060 11:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.000 11:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.259 11:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:43.260 11:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:43.519 true 00:27:43.519 11:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:43.519 11:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.778 11:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.778 11:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:43.778 11:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:44.038 true 00:27:44.038 11:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:44.038 11:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.974 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:45.232 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:45.232 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:45.490 true 00:27:45.490 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 
00:27:45.491 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.749 11:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.007 11:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:46.007 11:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:46.007 true 00:27:46.007 11:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:46.007 11:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.386 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:27:47.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.386 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:47.386 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:47.644 true 00:27:47.644 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:47.644 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.578 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.578 11:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:48.578 11:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:48.836 true 00:27:48.836 11:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:48.836 11:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.094 11:22:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.352 11:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:49.352 11:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:49.610 true 00:27:49.610 11:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:49.610 11:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.565 11:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.823 11:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:50.823 11:22:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:51.082 true 00:27:51.082 11:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:51.082 11:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.017 11:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.017 11:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:52.017 11:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:52.276 true 00:27:52.276 11:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:52.276 11:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.535 11:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.535 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 
00:27:52.535 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:52.794 true 00:27:52.794 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:52.794 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.171 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.171 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:54.171 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:54.429 true 00:27:54.429 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:54.429 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.255 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.255 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:55.255 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:55.514 true 00:27:55.514 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:55.514 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.773 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.031 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:56.031 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:56.031 true 00:27:56.031 11:22:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:56.031 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.409 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.668 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:57.668 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:57.668 true 00:27:57.668 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:57.668 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.927 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.185 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:58.185 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:58.443 true 00:27:58.443 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:58.443 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.378 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.636 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:59.636 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:59.895 true 00:27:59.895 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:27:59.895 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:00.830 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.830 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:00.830 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:01.088 true 00:28:01.089 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:28:01.089 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.347 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.347 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:01.347 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:01.604 true 00:28:01.604 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:28:01.604 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.538 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.796 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:02.796 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:03.054 true 00:28:03.054 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:28:03.054 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.312 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.570 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:03.570 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:03.570 true 00:28:03.570 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:28:03.570 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.947 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.947 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:04.947 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:04.947 true 00:28:05.205 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:28:05.205 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.205 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.463 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:05.463 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:05.721 true 00:28:05.722 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:28:05.722 11:22:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.657 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.915 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:06.915 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:07.173 true 00:28:07.173 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 00:28:07.173 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.173 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.431 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:07.431 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:07.690 true 00:28:07.690 11:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290 
00:28:07.690 11:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:08.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:08.626 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:08.884 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:08.884 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:08.884 Initializing NVMe Controllers
00:28:08.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:08.884 Controller IO queue size 128, less than required.
00:28:08.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:08.884 Controller IO queue size 128, less than required.
00:28:08.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:08.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:08.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:08.884 Initialization complete. Launching workers.
00:28:08.884 ========================================================
00:28:08.884 Latency(us)
00:28:08.884 Device Information : IOPS MiB/s Average min max
00:28:08.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1422.53 0.69 57590.17 2890.92 1042765.44
00:28:08.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16576.80 8.09 7721.08 1612.43 382614.93
00:28:08.884 ========================================================
00:28:08.884 Total : 17999.33 8.79 11662.34 1612.43 1042765.44
00:28:09.143 true
00:28:09.143 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 33290
00:28:09.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (33290) - No such process
00:28:09.143 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 33290
00:28:09.143 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:09.402 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:09.402 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:09.402 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:09.402 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:09.402 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:09.402 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:09.660 null0 00:28:09.660 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:09.660 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:09.660 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:09.919 null1 00:28:09.919 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:09.919 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:09.919 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:09.919 null2 00:28:09.919 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:09.919 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:09.919 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:10.177 null3 00:28:10.177 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:28:10.177 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:10.177 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:28:10.436 null4
00:28:10.436 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:10.436 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:10.436 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:28:10.436 null5
00:28:10.436 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:10.436 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:10.436 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:28:10.695 null6
00:28:10.695 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:10.695 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:10.695 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:28:10.955 null7
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:10.955 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 38625 38627 38628 38630 38632 38634 38636 38638
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:10.956 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:11.216 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:11.476 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:11.735 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:11.994 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:11.994 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:11.994 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:11.994 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:11.994 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:11.994 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:11.994 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:11.994 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:12.254 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:12.513 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:12.514 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:12.772 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:12.772 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:12.772 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:12.772 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:12.772 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:12.772 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:12.772 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:12.772 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n
3 nqn.2016-06.io.spdk:cnode1 null2 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:13.031 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:13.290 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:13.290 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:13.290 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:13.290 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.290 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:13.290 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:13.290 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:13.290 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.550 11:22:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.550 11:22:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:13.550 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:13.550 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:13.550 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:13.550 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:13.550 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.550 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:13.550 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:13.809 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.809 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.809 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:13.809 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.809 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.809 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:13.809 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.809 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.809 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.810 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:14.069 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:14.069 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:14.069 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.069 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:28:14.069 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:14.069 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:14.069 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:14.069 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.329 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:14.589 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:14.589 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:14.589 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:14.589 11:22:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:14.589 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:14.589 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:14.589 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.589 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.589 11:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.589 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:14.848 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.848 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:14.848 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:14.849 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:14.849 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:14.849 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:14.849 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:14.849 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.108 11:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:15.108 rmmod nvme_tcp 00:28:15.108 rmmod nvme_fabrics 00:28:15.108 rmmod nvme_keyring 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 33024 ']' 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 33024 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 33024 ']' 00:28:15.108 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 33024 00:28:15.109 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:15.109 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:15.109 11:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 33024 00:28:15.109 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:15.109 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:15.109 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 33024' 00:28:15.109 killing process with pid 33024 00:28:15.109 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 33024 00:28:15.109 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 33024 00:28:15.368 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:15.368 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:15.368 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:15.368 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:15.368 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:15.368 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:15.368 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:15.368 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:15.369 11:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:15.369 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.369 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.369 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.907 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:17.907 00:28:17.907 real 0m46.992s 00:28:17.907 user 2m55.279s 00:28:17.907 sys 0m19.657s 00:28:17.907 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.907 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:17.907 ************************************ 00:28:17.907 END TEST nvmf_ns_hotplug_stress 00:28:17.907 ************************************ 00:28:17.907 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:17.907 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:17.907 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.907 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:17.907 ************************************ 00:28:17.907 START TEST nvmf_delete_subsystem 00:28:17.907 ************************************ 00:28:17.908 11:22:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:17.908 * Looking for test storage... 00:28:17.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:17.908 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:17.908 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:17.908 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.908 11:22:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.908 11:22:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:17.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.908 --rc genhtml_branch_coverage=1 00:28:17.908 --rc genhtml_function_coverage=1 00:28:17.908 --rc genhtml_legend=1 00:28:17.908 --rc geninfo_all_blocks=1 00:28:17.908 --rc geninfo_unexecuted_blocks=1 00:28:17.908 00:28:17.908 ' 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:17.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.908 --rc genhtml_branch_coverage=1 00:28:17.908 --rc genhtml_function_coverage=1 00:28:17.908 --rc genhtml_legend=1 00:28:17.908 --rc geninfo_all_blocks=1 00:28:17.908 --rc geninfo_unexecuted_blocks=1 00:28:17.908 00:28:17.908 ' 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:17.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.908 --rc genhtml_branch_coverage=1 00:28:17.908 --rc 
genhtml_function_coverage=1 00:28:17.908 --rc genhtml_legend=1 00:28:17.908 --rc geninfo_all_blocks=1 00:28:17.908 --rc geninfo_unexecuted_blocks=1 00:28:17.908 00:28:17.908 ' 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:17.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.908 --rc genhtml_branch_coverage=1 00:28:17.908 --rc genhtml_function_coverage=1 00:28:17.908 --rc genhtml_legend=1 00:28:17.908 --rc geninfo_all_blocks=1 00:28:17.908 --rc geninfo_unexecuted_blocks=1 00:28:17.908 00:28:17.908 ' 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:17.908 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.909 11:22:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.909 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:24.482 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:24.482 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.482 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:24.483 Found net devices under 0000:86:00.0: cvl_0_0 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.483 11:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:24.483 Found net devices under 0000:86:00.1: cvl_0_1 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.483 11:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:28:24.483 00:28:24.483 --- 10.0.0.2 ping statistics --- 00:28:24.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.483 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:28:24.483 00:28:24.483 --- 10.0.0.1 ping statistics --- 00:28:24.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.483 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:24.483 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:24.483 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:24.483 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:24.483 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:24.483 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:28:24.483 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=42892 00:28:24.483 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 42892 00:28:24.483 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:24.483 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 42892 ']' 00:28:24.483 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.483 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.484 [2024-11-20 11:22:51.059253] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:24.484 [2024-11-20 11:22:51.060163] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:28:24.484 [2024-11-20 11:22:51.060195] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.484 [2024-11-20 11:22:51.138401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:24.484 [2024-11-20 11:22:51.181299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.484 [2024-11-20 11:22:51.181338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.484 [2024-11-20 11:22:51.181346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.484 [2024-11-20 11:22:51.181352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.484 [2024-11-20 11:22:51.181360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.484 [2024-11-20 11:22:51.182601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.484 [2024-11-20 11:22:51.182601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.484 [2024-11-20 11:22:51.251291] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:24.484 [2024-11-20 11:22:51.251914] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:24.484 [2024-11-20 11:22:51.252086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.484 [2024-11-20 11:22:51.331262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.484 [2024-11-20 11:22:51.355499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.484 NULL1 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:28:24.484 Delay0 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=43013 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:24.484 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:24.484 [2024-11-20 11:22:51.465360] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:26.389 11:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:26.389 11:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.389 11:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 starting I/O failed: -6 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 starting I/O failed: -6 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 starting I/O failed: -6 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 starting I/O failed: -6 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 starting I/O failed: -6 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 starting I/O failed: -6 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 
00:28:26.389 starting I/O failed: -6 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 starting I/O failed: -6 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 [2024-11-20 11:22:53.668534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb92c0 is same with the state(6) to be set 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Read completed with error (sct=0, sc=8) 00:28:26.389 Write completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, 
sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 [2024-11-20 11:22:53.668811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb9680 is same with the state(6) to be set 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 
Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 [2024-11-20 11:22:53.668995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb9860 is same with the state(6) to be set 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed 
with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Read completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 Write completed with error (sct=0, sc=8) 00:28:26.390 [2024-11-20 11:22:53.669160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb94a0 is same with the state(6) to be set 00:28:27.327 [2024-11-20 11:22:54.644071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eba9a0 is same with the state(6) to be set 00:28:27.327 Read completed with error (sct=0, sc=8) 00:28:27.327 Write completed with error (sct=0, sc=8) 00:28:27.327 Write completed with error (sct=0, sc=8) 00:28:27.327 Read completed with error (sct=0, sc=8) 00:28:27.327 starting I/O failed: -6 00:28:27.327 Read completed with error (sct=0, 
sc=8) 00:28:27.327 Write completed with error (sct=0, sc=8) 00:28:27.327 Read completed with error (sct=0, sc=8) 00:28:27.327 Read completed with error (sct=0, sc=8) 00:28:27.327 starting I/O failed: -6 00:28:27.327 Write completed with error (sct=0, sc=8) 00:28:27.327 Read completed with error (sct=0, sc=8) 00:28:27.327 Read completed with error (sct=0, sc=8) 00:28:27.327 Write completed with error (sct=0, sc=8) 00:28:27.327 starting I/O failed: -6 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Write completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 starting I/O failed: -6 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Write completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 starting I/O failed: -6 00:28:27.328 Write completed with error (sct=0, sc=8) 00:28:27.328 Write completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Write completed with error (sct=0, sc=8) 00:28:27.328 starting I/O failed: -6 00:28:27.328 [2024-11-20 11:22:54.671582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff6a4000c40 is same with the state(6) to be set 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Write completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with error (sct=0, sc=8) 00:28:27.328 Read completed with 
error (sct=0, sc=8)
00:28:27.328 Read completed with error (sct=0, sc=8)
00:28:27.328 Write completed with error (sct=0, sc=8)
00:28:27.328 [2024-11-20 11:22:54.671803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff6a400d350 is same with the state(6) to be set
00:28:27.328 [2024-11-20 11:22:54.671980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff6a400d7e0 is same with the state(6) to be set
00:28:27.328 [2024-11-20 11:22:54.672650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff6a400d020 is same with the state(6) to be set
00:28:27.328 Initializing NVMe Controllers
00:28:27.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:27.328 Controller IO queue size 128, less than required.
00:28:27.328 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:27.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:27.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:27.328 Initialization complete. Launching workers.
00:28:27.328 ========================================================
00:28:27.328 Latency(us)
00:28:27.328 Device Information : IOPS MiB/s Average min max
00:28:27.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 139.10 0.07 919772.89 288.31 1009987.23
00:28:27.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.39 0.08 1123879.00 236.99 2003770.59
00:28:27.328 ========================================================
00:28:27.328 Total : 309.49 0.15 1032145.92 236.99 2003770.59
00:28:27.328
00:28:27.328 [2024-11-20 11:22:54.673327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eba9a0 (9): Bad file descriptor
00:28:27.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:27.328 11:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.328 11:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:27.328 11:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 43013
00:28:27.329 11:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 43013
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (43013) - No such process
00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 43013
00:28:27.897 11:22:55
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 43013 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 43013 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.897 [2024-11-20 11:22:55.207599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=43542 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:27.897 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:27.898 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 43542 00:28:27.898 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:27.898 [2024-11-20 11:22:55.276386] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:28.465 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:28.465 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 43542 00:28:28.465 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:29.033 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:29.033 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 43542 00:28:29.033 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:29.292 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:29.292 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 43542 00:28:29.292 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:29.860 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( 
delay++ > 20 )) 00:28:29.860 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 43542 00:28:29.861 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:30.428 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:30.428 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 43542 00:28:30.428 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:30.996 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:30.996 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 43542 00:28:30.996 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:30.996 Initializing NVMe Controllers 00:28:30.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.996 Controller IO queue size 128, less than required. 00:28:30.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:30.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:30.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:30.996 Initialization complete. Launching workers. 
00:28:30.996 ========================================================
00:28:30.996 Latency(us)
00:28:30.996 Device Information : IOPS MiB/s Average min max
00:28:30.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002221.93 1000138.10 1007135.65
00:28:30.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004087.01 1000247.53 1010652.75
00:28:30.996 ========================================================
00:28:30.996 Total : 256.00 0.12 1003154.47 1000138.10 1010652.75
00:28:30.996
00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 43542
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (43542) - No such process
00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 43542
00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:31.566 rmmod nvme_tcp 00:28:31.566 rmmod nvme_fabrics 00:28:31.566 rmmod nvme_keyring 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 42892 ']' 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 42892 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 42892 ']' 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 42892 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 42892 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 42892' 00:28:31.566 killing process with pid 42892 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 42892 00:28:31.566 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 42892 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.566 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 
-- # _remove_spdk_ns
00:28:34.175 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:34.175
00:28:34.175 real 0m16.196s
00:28:34.175 user 0m26.076s
00:28:34.175 sys 0m6.298s
00:28:34.175 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:34.175 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:34.175 ************************************
00:28:34.175 END TEST nvmf_delete_subsystem
00:28:34.175 ************************************
00:28:34.175 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:28:34.175 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:28:34.175 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:34.175 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:34.175 ************************************
00:28:34.175 START TEST nvmf_host_management
00:28:34.175 ************************************
00:28:34.175 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
* Looking for test storage...
00:28:34.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:34.176 11:23:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:34.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.176 --rc genhtml_branch_coverage=1 00:28:34.176 --rc genhtml_function_coverage=1 00:28:34.176 --rc genhtml_legend=1 00:28:34.176 --rc geninfo_all_blocks=1 00:28:34.176 --rc geninfo_unexecuted_blocks=1 00:28:34.176 00:28:34.176 ' 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:34.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.176 --rc genhtml_branch_coverage=1 00:28:34.176 --rc genhtml_function_coverage=1 00:28:34.176 --rc genhtml_legend=1 00:28:34.176 --rc geninfo_all_blocks=1 00:28:34.176 --rc geninfo_unexecuted_blocks=1 00:28:34.176 00:28:34.176 ' 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:34.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.176 --rc genhtml_branch_coverage=1 00:28:34.176 --rc genhtml_function_coverage=1 00:28:34.176 --rc genhtml_legend=1 00:28:34.176 --rc geninfo_all_blocks=1 00:28:34.176 --rc geninfo_unexecuted_blocks=1 00:28:34.176 00:28:34.176 ' 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:34.176 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.176 --rc genhtml_branch_coverage=1 00:28:34.176 --rc genhtml_function_coverage=1 00:28:34.176 --rc genhtml_legend=1 00:28:34.176 --rc geninfo_all_blocks=1 00:28:34.176 --rc geninfo_unexecuted_blocks=1 00:28:34.176 00:28:34.176 ' 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.176 11:23:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.176 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.177 
11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.177 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:40.748 
11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.748 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.749 11:23:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:40.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.749 11:23:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:40.749 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.749 11:23:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:40.749 Found net devices under 0000:86:00.0: cvl_0_0 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:40.749 Found net devices under 0000:86:00.1: cvl_0_1 00:28:40.749 11:23:07 
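The trace above builds per-vendor PCI device-ID lists (e810, x722, mlx) and then matches each discovered device against them: both ports, 0000:86:00.0 and 0000:86:00.1, report 0x8086:0x159b and land in the e810 bucket, bound to the ice driver. A minimal sketch of that bucketing, using a hypothetical `classify` helper and only the IDs visible in this trace:

```shell
# Hypothetical helper mirroring the device-ID bucketing in nvmf/common.sh.
# Only IDs that appear in the trace above are listed; the real script keeps
# them in bash arrays populated from a PCI bus cache.
classify() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;    # Intel E810 family (ice driver)
        0x37d2)        echo x722 ;;    # Intel X722
        0xa2dc|0x1021|0xa2d6|0x101d|0x101b|0x1017|0x1019|0x1015|0x1013)
                       echo mlx ;;     # Mellanox ConnectX family
        *)             echo unknown ;;
    esac
}
classify 0x159b    # the device ID found on both 0000:86:00.x ports above
```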
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:40.749 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:40.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:40.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:28:40.750 00:28:40.750 --- 10.0.0.2 ping statistics --- 00:28:40.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.750 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:40.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:40.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:28:40.750 00:28:40.750 --- 10.0.0.1 ping statistics --- 00:28:40.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.750 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=47699 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 47699 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 47699 ']' 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 
-- # local max_retries=100 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.750 [2024-11-20 11:23:07.356224] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:40.750 [2024-11-20 11:23:07.357161] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:28:40.750 [2024-11-20 11:23:07.357193] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.750 [2024-11-20 11:23:07.436132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:40.750 [2024-11-20 11:23:07.478932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.750 [2024-11-20 11:23:07.478975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.750 [2024-11-20 11:23:07.478982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.750 [2024-11-20 11:23:07.478988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.750 [2024-11-20 11:23:07.478993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:40.750 [2024-11-20 11:23:07.480587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.750 [2024-11-20 11:23:07.480693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.750 [2024-11-20 11:23:07.480812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.750 [2024-11-20 11:23:07.480813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:40.750 [2024-11-20 11:23:07.547929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:40.750 [2024-11-20 11:23:07.548631] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:40.750 [2024-11-20 11:23:07.548936] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:40.750 [2024-11-20 11:23:07.549299] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:40.750 [2024-11-20 11:23:07.549344] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
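nvmfappstart launches the target with `-m 0x1E`, and the log shows exactly four reactors coming up on cores 1-4: 0x1E has bits 1, 2, 3, and 4 set. A small sketch of reading the mask bitwise (the `mask_to_cores` name is illustrative, not part of the test scripts):

```shell
# Decode a DPDK/SPDK core mask into the list of selected cores.
# 0x1E = binary 11110 -> cores 1 2 3 4, matching the four reactors above.
mask_to_cores() {
    local mask=$(( $1 )) core out=""
    for core in 0 1 2 3 4 5 6 7; do
        if (( (mask >> core) & 1 )); then
            out+="$core "    # bit N set means core N is used
        fi
    done
    echo "${out% }"
}
mask_to_cores 0x1E
```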
00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.750 [2024-11-20 11:23:07.617505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.750 11:23:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.750 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.751 Malloc0 00:28:40.751 [2024-11-20 11:23:07.705800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=47741 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 47741 /var/tmp/bdevperf.sock 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 47741 ']' 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:40.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.751 { 00:28:40.751 "params": { 00:28:40.751 "name": "Nvme$subsystem", 00:28:40.751 "trtype": "$TEST_TRANSPORT", 00:28:40.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.751 "adrfam": "ipv4", 00:28:40.751 "trsvcid": "$NVMF_PORT", 00:28:40.751 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.751 "hdgst": ${hdgst:-false}, 00:28:40.751 "ddgst": ${ddgst:-false} 00:28:40.751 }, 00:28:40.751 "method": "bdev_nvme_attach_controller" 00:28:40.751 } 00:28:40.751 EOF 00:28:40.751 )") 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:40.751 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:40.751 "params": { 00:28:40.751 "name": "Nvme0", 00:28:40.751 "trtype": "tcp", 00:28:40.751 "traddr": "10.0.0.2", 00:28:40.751 "adrfam": "ipv4", 00:28:40.751 "trsvcid": "4420", 00:28:40.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:40.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:40.751 "hdgst": false, 00:28:40.751 "ddgst": false 00:28:40.751 }, 00:28:40.751 "method": "bdev_nvme_attach_controller" 00:28:40.751 }' 00:28:40.751 [2024-11-20 11:23:07.805435] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:28:40.751 [2024-11-20 11:23:07.805485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47741 ] 00:28:40.751 [2024-11-20 11:23:07.882383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.751 [2024-11-20 11:23:07.923975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.751 Running I/O for 10 seconds... 
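gen_nvmf_target_json expands the heredoc template above into one `bdev_nvme_attach_controller` stanza per subsystem, validates it with `jq .`, and feeds it to bdevperf over `/dev/fd/63`. A self-contained sketch of the resolved config for this run (values copied from the `printf '%s\n'` output above; jq is assumed available, as the script itself uses it):

```shell
# The attach-controller config as resolved in this run (Nvme0, TCP to
# 10.0.0.2:4420, digests off). jq round-trips it to confirm valid JSON.
cfg='{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'
printf '%s\n' "$cfg" | jq -r .method
```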
00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:40.751 11:23:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.751 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:41.011 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.011 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:28:41.011 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:28:41.011 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.271 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:41.271 [2024-11-20 11:23:08.565461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.271 [2024-11-20 11:23:08.565689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.271 [2024-11-20 11:23:08.565700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:41.272 [2024-11-20 11:23:08.565708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 
11:23:08.565789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565962] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.565991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.565999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:41.272 [2024-11-20 11:23:08.566134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 
11:23:08.566214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566385] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.272 [2024-11-20 11:23:08.566460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.272 [2024-11-20 11:23:08.566468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.272 [2024-11-20 11:23:08.566474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:41.272 [2024-11-20 11:23:08.566501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:41.272 [2024-11-20 11:23:08.567454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:41.272 task offset: 98304 on job bdev=Nvme0n1 fails
00:28:41.272
00:28:41.272 Latency(us)
00:28:41.272 [2024-11-20T10:23:08.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:41.272 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.272 Job: Nvme0n1 ended in about 0.40 seconds with error
00:28:41.272 Verification LBA range: start 0x0 length 0x400
00:28:41.272 Nvme0n1 : 0.40 1906.89 119.18 158.91 0.00 30135.80 2949.12 27696.08
00:28:41.272 [2024-11-20T10:23:08.768Z] ===================================================================================================================
00:28:41.272 [2024-11-20T10:23:08.768Z] Total : 1906.89 119.18 158.91 0.00 30135.80 2949.12 27696.08
00:28:41.272 [2024-11-20 11:23:08.569861] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:41.272 [2024-11-20 11:23:08.569883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0a500 (9): Bad file descriptor
00:28:41.272 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.272 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:41.272 [2024-11-20 11:23:08.570828] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:28:41.272 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.272 [2024-11-20 11:23:08.570904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:28:41.272 [2024-11-20 11:23:08.570927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:41.272 [2024-11-20 11:23:08.570942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:28:41.272 [2024-11-20 11:23:08.570957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:28:41.272 [2024-11-20 11:23:08.570964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.272 [2024-11-20 11:23:08.570971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a0a500
00:28:41.272 [2024-11-20 11:23:08.570991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0a500 (9): Bad file descriptor
00:28:41.272 [2024-11-20 11:23:08.571002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:41.272 [2024-11-20 11:23:08.571009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:41.272 [2024-11-20 11:23:08.571017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:41.273 [2024-11-20 11:23:08.571025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:41.273 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:41.273 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.273 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 47741
00:28:42.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (47741) - No such process
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:42.208 11:23:09
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:42.208 {
00:28:42.208 "params": {
00:28:42.208 "name": "Nvme$subsystem",
00:28:42.208 "trtype": "$TEST_TRANSPORT",
00:28:42.208 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:42.208 "adrfam": "ipv4",
00:28:42.208 "trsvcid": "$NVMF_PORT",
00:28:42.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:42.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:42.208 "hdgst": ${hdgst:-false},
00:28:42.208 "ddgst": ${ddgst:-false}
00:28:42.208 },
00:28:42.208 "method": "bdev_nvme_attach_controller"
00:28:42.208 }
00:28:42.208 EOF
00:28:42.208 )")
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:28:42.208 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:42.208 "params": {
00:28:42.208 "name": "Nvme0",
00:28:42.208 "trtype": "tcp",
00:28:42.208 "traddr": "10.0.0.2",
00:28:42.208 "adrfam": "ipv4",
00:28:42.208 "trsvcid": "4420",
00:28:42.208 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:42.208 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:42.208 "hdgst": false,
00:28:42.208 "ddgst": false
00:28:42.208 },
00:28:42.208 "method": "bdev_nvme_attach_controller"
00:28:42.208 }'
00:28:42.208 [2024-11-20 11:23:09.639209] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization...
00:28:42.208 [2024-11-20 11:23:09.639257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47995 ]
00:28:42.465 [2024-11-20 11:23:09.717526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:42.465 [2024-11-20 11:23:09.757529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:42.465 Running I/O for 1 seconds...
00:28:43.845 1983.00 IOPS, 123.94 MiB/s
00:28:43.845
00:28:43.845 Latency(us)
00:28:43.845 [2024-11-20T10:23:11.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.845 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:43.845 Verification LBA range: start 0x0 length 0x400
00:28:43.845 Nvme0n1 : 1.02 2007.33 125.46 0.00 0.00 31379.19 4900.95 27468.13
00:28:43.845 [2024-11-20T10:23:11.341Z] ===================================================================================================================
00:28:43.845 [2024-11-20T10:23:11.341Z] Total : 2007.33 125.46 0.00 0.00 31379.19 4900.95 27468.13
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 47699 ']'
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 47699
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 47699 ']'
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 47699
00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:28:43.845 11:23:11
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47699 00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47699' 00:28:43.845 killing process with pid 47699 00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 47699 00:28:43.845 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 47699 00:28:44.105 [2024-11-20 11:23:11.417929] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:44.105 11:23:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.105 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.641 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:46.641 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:46.641 00:28:46.641 real 0m12.350s 00:28:46.641 user 0m17.904s 00:28:46.641 sys 0m6.331s 00:28:46.641 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:46.641 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.641 ************************************ 00:28:46.641 END TEST nvmf_host_management 00:28:46.642 ************************************ 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:46.642 
11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:46.642 ************************************ 00:28:46.642 START TEST nvmf_lvol 00:28:46.642 ************************************ 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:46.642 * Looking for test storage... 00:28:46.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.642 11:23:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.642 --rc genhtml_branch_coverage=1 00:28:46.642 --rc 
genhtml_function_coverage=1 00:28:46.642 --rc genhtml_legend=1 00:28:46.642 --rc geninfo_all_blocks=1 00:28:46.642 --rc geninfo_unexecuted_blocks=1 00:28:46.642 00:28:46.642 ' 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.642 --rc genhtml_branch_coverage=1 00:28:46.642 --rc genhtml_function_coverage=1 00:28:46.642 --rc genhtml_legend=1 00:28:46.642 --rc geninfo_all_blocks=1 00:28:46.642 --rc geninfo_unexecuted_blocks=1 00:28:46.642 00:28:46.642 ' 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.642 --rc genhtml_branch_coverage=1 00:28:46.642 --rc genhtml_function_coverage=1 00:28:46.642 --rc genhtml_legend=1 00:28:46.642 --rc geninfo_all_blocks=1 00:28:46.642 --rc geninfo_unexecuted_blocks=1 00:28:46.642 00:28:46.642 ' 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.642 --rc genhtml_branch_coverage=1 00:28:46.642 --rc genhtml_function_coverage=1 00:28:46.642 --rc genhtml_legend=1 00:28:46.642 --rc geninfo_all_blocks=1 00:28:46.642 --rc geninfo_unexecuted_blocks=1 00:28:46.642 00:28:46.642 ' 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.642 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.643 11:23:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.643 11:23:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.643 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:53.215 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:53.215 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.215 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.216 11:23:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:53.216 Found net devices under 0000:86:00.0: cvl_0_0 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:53.216 Found net devices under 0000:86:00.1: cvl_0_1 00:28:53.216 11:23:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.216 11:23:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:53.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:28:53.216 00:28:53.216 --- 10.0.0.2 ping statistics --- 00:28:53.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.216 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:53.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:28:53.216 00:28:53.216 --- 10.0.0.1 ping statistics --- 00:28:53.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.216 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:53.216 
11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=51754 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 51754 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 51754 ']' 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.216 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:53.216 [2024-11-20 11:23:19.804701] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:28:53.216 [2024-11-20 11:23:19.805638] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:28:53.216 [2024-11-20 11:23:19.805673] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.216 [2024-11-20 11:23:19.885716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:53.216 [2024-11-20 11:23:19.927639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.216 [2024-11-20 11:23:19.927677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.217 [2024-11-20 11:23:19.927684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.217 [2024-11-20 11:23:19.927690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.217 [2024-11-20 11:23:19.927695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.217 [2024-11-20 11:23:19.929019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.217 [2024-11-20 11:23:19.929125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.217 [2024-11-20 11:23:19.929127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.217 [2024-11-20 11:23:19.996760] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:53.217 [2024-11-20 11:23:19.997543] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:53.217 [2024-11-20 11:23:19.997614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:53.217 [2024-11-20 11:23:19.997809] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:53.217 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.217 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:53.217 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.217 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.217 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:53.217 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.217 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:53.476 [2024-11-20 11:23:20.869864] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.476 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:53.736 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:53.736 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:53.996 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:53.996 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:54.256 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:54.515 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3986c483-c39f-4913-a702-d8a66fe7934a 00:28:54.515 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3986c483-c39f-4913-a702-d8a66fe7934a lvol 20 00:28:54.515 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e69062a1-7287-4778-83d2-630697309717 00:28:54.515 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:54.775 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e69062a1-7287-4778-83d2-630697309717 00:28:55.034 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:55.034 [2024-11-20 11:23:22.521777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.293 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:55.293 
11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=52247 00:28:55.293 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:55.293 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:56.671 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e69062a1-7287-4778-83d2-630697309717 MY_SNAPSHOT 00:28:56.671 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=68c59f23-4367-4ce7-b985-e0ad9d455033 00:28:56.671 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e69062a1-7287-4778-83d2-630697309717 30 00:28:56.930 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 68c59f23-4367-4ce7-b985-e0ad9d455033 MY_CLONE 00:28:57.189 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=31e52a47-c801-4185-8e1d-cf2f8be928b4 00:28:57.189 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 31e52a47-c801-4185-8e1d-cf2f8be928b4 00:28:57.758 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 52247 00:29:05.878 Initializing NVMe Controllers 00:29:05.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:05.878 Controller 
IO queue size 128, less than required. 00:29:05.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:05.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:05.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:05.878 Initialization complete. Launching workers. 00:29:05.878 ======================================================== 00:29:05.878 Latency(us) 00:29:05.878 Device Information : IOPS MiB/s Average min max 00:29:05.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12203.90 47.67 10491.22 1633.11 59759.71 00:29:05.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12314.00 48.10 10397.07 3102.03 49026.06 00:29:05.878 ======================================================== 00:29:05.878 Total : 24517.89 95.77 10443.93 1633.11 59759.71 00:29:05.878 00:29:05.878 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:06.137 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e69062a1-7287-4778-83d2-630697309717 00:29:06.137 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3986c483-c39f-4913-a702-d8a66fe7934a 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:06.396 rmmod nvme_tcp 00:29:06.396 rmmod nvme_fabrics 00:29:06.396 rmmod nvme_keyring 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 51754 ']' 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 51754 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 51754 ']' 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 51754 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers 
-o comm= 51754 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 51754' 00:29:06.396 killing process with pid 51754 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 51754 00:29:06.396 11:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 51754 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.656 11:23:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.656 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:09.194 00:29:09.194 real 0m22.561s 00:29:09.194 user 0m55.810s 00:29:09.194 sys 0m10.023s 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:09.194 ************************************ 00:29:09.194 END TEST nvmf_lvol 00:29:09.194 ************************************ 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:09.194 ************************************ 00:29:09.194 START TEST nvmf_lvs_grow 00:29:09.194 ************************************ 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:09.194 * Looking for test storage... 
00:29:09.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.194 11:23:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.194 11:23:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:09.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.194 --rc genhtml_branch_coverage=1 00:29:09.194 --rc genhtml_function_coverage=1 00:29:09.194 --rc genhtml_legend=1 00:29:09.194 --rc geninfo_all_blocks=1 00:29:09.194 --rc geninfo_unexecuted_blocks=1 00:29:09.194 00:29:09.194 ' 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:09.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.194 --rc genhtml_branch_coverage=1 00:29:09.194 --rc genhtml_function_coverage=1 00:29:09.194 --rc genhtml_legend=1 00:29:09.194 --rc geninfo_all_blocks=1 00:29:09.194 --rc geninfo_unexecuted_blocks=1 00:29:09.194 00:29:09.194 ' 00:29:09.194 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:09.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.194 --rc genhtml_branch_coverage=1 00:29:09.194 --rc genhtml_function_coverage=1 00:29:09.194 --rc genhtml_legend=1 00:29:09.195 --rc geninfo_all_blocks=1 00:29:09.195 --rc geninfo_unexecuted_blocks=1 00:29:09.195 00:29:09.195 ' 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.195 --rc genhtml_branch_coverage=1 00:29:09.195 --rc genhtml_function_coverage=1 00:29:09.195 --rc genhtml_legend=1 00:29:09.195 --rc geninfo_all_blocks=1 00:29:09.195 --rc 
geninfo_unexecuted_blocks=1 00:29:09.195 00:29:09.195 ' 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:09.195 11:23:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.195 11:23:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.195 11:23:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.195 11:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:15.766 
11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.766 11:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:15.766 11:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:15.766 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:15.766 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:15.767 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:15.767 Found net devices under 0000:86:00.0: cvl_0_0 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.767 11:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:15.767 Found net devices under 0000:86:00.1: cvl_0_1 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:15.767 
11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:15.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:15.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:29:15.767 00:29:15.767 --- 10.0.0.2 ping statistics --- 00:29:15.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.767 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:15.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:15.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:29:15.767 00:29:15.767 --- 10.0.0.1 ping statistics --- 00:29:15.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.767 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:15.767 11:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=57597 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 57597 00:29:15.767 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 57597 ']' 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:15.768 [2024-11-20 11:23:42.437088] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:15.768 [2024-11-20 11:23:42.437983] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:29:15.768 [2024-11-20 11:23:42.438016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.768 [2024-11-20 11:23:42.516480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.768 [2024-11-20 11:23:42.558360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.768 [2024-11-20 11:23:42.558396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.768 [2024-11-20 11:23:42.558404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.768 [2024-11-20 11:23:42.558411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.768 [2024-11-20 11:23:42.558416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:15.768 [2024-11-20 11:23:42.558975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.768 [2024-11-20 11:23:42.626699] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:15.768 [2024-11-20 11:23:42.626927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:15.768 [2024-11-20 11:23:42.859605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:15.768 ************************************ 00:29:15.768 START TEST lvs_grow_clean 00:29:15.768 ************************************ 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:15.768 11:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:15.768 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:15.768 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:15.768 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:16.026 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a80347d8-3619-4040-8307-75bf54262e85 00:29:16.026 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a80347d8-3619-4040-8307-75bf54262e85 00:29:16.026 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:16.285 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:16.285 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:16.285 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a80347d8-3619-4040-8307-75bf54262e85 lvol 150 00:29:16.544 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a4150f6d-a7f5-4c01-807b-3468a8e90b50 00:29:16.544 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:16.544 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:16.544 [2024-11-20 11:23:43.967369] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:16.544 [2024-11-20 11:23:43.967501] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:16.544 true 00:29:16.544 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a80347d8-3619-4040-8307-75bf54262e85 00:29:16.544 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:16.804 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:16.804 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:17.063 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4150f6d-a7f5-4c01-807b-3468a8e90b50 00:29:17.322 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:17.322 [2024-11-20 11:23:44.735855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.322 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:17.581 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=58098 00:29:17.581 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.581 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:17.581 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 58098 /var/tmp/bdevperf.sock 00:29:17.581 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 58098 ']' 00:29:17.581 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:17.582 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.582 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:17.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:17.582 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.582 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:17.582 [2024-11-20 11:23:45.000335] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:29:17.582 [2024-11-20 11:23:45.000385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58098 ] 00:29:17.841 [2024-11-20 11:23:45.075972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.841 [2024-11-20 11:23:45.118181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.841 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.841 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:17.841 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:18.100 Nvme0n1 00:29:18.100 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:18.360 [ 00:29:18.360 { 00:29:18.360 "name": "Nvme0n1", 00:29:18.360 "aliases": [ 00:29:18.360 "a4150f6d-a7f5-4c01-807b-3468a8e90b50" 00:29:18.360 ], 00:29:18.360 "product_name": "NVMe disk", 00:29:18.360 
"block_size": 4096, 00:29:18.360 "num_blocks": 38912, 00:29:18.360 "uuid": "a4150f6d-a7f5-4c01-807b-3468a8e90b50", 00:29:18.360 "numa_id": 1, 00:29:18.360 "assigned_rate_limits": { 00:29:18.360 "rw_ios_per_sec": 0, 00:29:18.360 "rw_mbytes_per_sec": 0, 00:29:18.360 "r_mbytes_per_sec": 0, 00:29:18.360 "w_mbytes_per_sec": 0 00:29:18.360 }, 00:29:18.360 "claimed": false, 00:29:18.360 "zoned": false, 00:29:18.360 "supported_io_types": { 00:29:18.360 "read": true, 00:29:18.360 "write": true, 00:29:18.360 "unmap": true, 00:29:18.360 "flush": true, 00:29:18.360 "reset": true, 00:29:18.360 "nvme_admin": true, 00:29:18.360 "nvme_io": true, 00:29:18.360 "nvme_io_md": false, 00:29:18.360 "write_zeroes": true, 00:29:18.360 "zcopy": false, 00:29:18.360 "get_zone_info": false, 00:29:18.360 "zone_management": false, 00:29:18.360 "zone_append": false, 00:29:18.360 "compare": true, 00:29:18.360 "compare_and_write": true, 00:29:18.360 "abort": true, 00:29:18.360 "seek_hole": false, 00:29:18.360 "seek_data": false, 00:29:18.360 "copy": true, 00:29:18.360 "nvme_iov_md": false 00:29:18.360 }, 00:29:18.360 "memory_domains": [ 00:29:18.360 { 00:29:18.360 "dma_device_id": "system", 00:29:18.360 "dma_device_type": 1 00:29:18.360 } 00:29:18.360 ], 00:29:18.360 "driver_specific": { 00:29:18.360 "nvme": [ 00:29:18.360 { 00:29:18.360 "trid": { 00:29:18.360 "trtype": "TCP", 00:29:18.360 "adrfam": "IPv4", 00:29:18.360 "traddr": "10.0.0.2", 00:29:18.360 "trsvcid": "4420", 00:29:18.360 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:18.360 }, 00:29:18.360 "ctrlr_data": { 00:29:18.360 "cntlid": 1, 00:29:18.360 "vendor_id": "0x8086", 00:29:18.360 "model_number": "SPDK bdev Controller", 00:29:18.360 "serial_number": "SPDK0", 00:29:18.360 "firmware_revision": "25.01", 00:29:18.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:18.360 "oacs": { 00:29:18.360 "security": 0, 00:29:18.360 "format": 0, 00:29:18.360 "firmware": 0, 00:29:18.360 "ns_manage": 0 00:29:18.360 }, 00:29:18.360 "multi_ctrlr": true, 
00:29:18.360 "ana_reporting": false 00:29:18.360 }, 00:29:18.360 "vs": { 00:29:18.360 "nvme_version": "1.3" 00:29:18.360 }, 00:29:18.360 "ns_data": { 00:29:18.360 "id": 1, 00:29:18.360 "can_share": true 00:29:18.360 } 00:29:18.360 } 00:29:18.360 ], 00:29:18.360 "mp_policy": "active_passive" 00:29:18.360 } 00:29:18.360 } 00:29:18.360 ] 00:29:18.360 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=58104 00:29:18.360 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:18.360 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:18.360 Running I/O for 10 seconds... 00:29:19.297 Latency(us) 00:29:19.297 [2024-11-20T10:23:46.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.297 Nvme0n1 : 1.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:19.297 [2024-11-20T10:23:46.793Z] =================================================================================================================== 00:29:19.297 [2024-11-20T10:23:46.793Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:19.297 00:29:20.233 11:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a80347d8-3619-4040-8307-75bf54262e85 00:29:20.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.492 Nvme0n1 : 2.00 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:20.492 [2024-11-20T10:23:47.988Z] 
=================================================================================================================== 00:29:20.492 [2024-11-20T10:23:47.988Z] Total : 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:20.492 00:29:20.492 true 00:29:20.492 11:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a80347d8-3619-4040-8307-75bf54262e85 00:29:20.492 11:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:20.750 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:20.750 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:20.750 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 58104 00:29:21.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.318 Nvme0n1 : 3.00 22563.67 88.14 0.00 0.00 0.00 0.00 0.00 00:29:21.318 [2024-11-20T10:23:48.814Z] =================================================================================================================== 00:29:21.318 [2024-11-20T10:23:48.814Z] Total : 22563.67 88.14 0.00 0.00 0.00 0.00 0.00 00:29:21.318 00:29:22.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:22.256 Nvme0n1 : 4.00 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:29:22.256 [2024-11-20T10:23:49.752Z] =================================================================================================================== 00:29:22.256 [2024-11-20T10:23:49.752Z] Total : 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:29:22.256 00:29:23.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:29:23.635 Nvme0n1 : 5.00 22631.40 88.40 0.00 0.00 0.00 0.00 0.00 00:29:23.635 [2024-11-20T10:23:51.131Z] =================================================================================================================== 00:29:23.635 [2024-11-20T10:23:51.131Z] Total : 22631.40 88.40 0.00 0.00 0.00 0.00 0.00 00:29:23.635 00:29:24.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.572 Nvme0n1 : 6.00 22690.67 88.64 0.00 0.00 0.00 0.00 0.00 00:29:24.572 [2024-11-20T10:23:52.068Z] =================================================================================================================== 00:29:24.572 [2024-11-20T10:23:52.068Z] Total : 22690.67 88.64 0.00 0.00 0.00 0.00 0.00 00:29:24.572 00:29:25.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.509 Nvme0n1 : 7.00 22751.14 88.87 0.00 0.00 0.00 0.00 0.00 00:29:25.509 [2024-11-20T10:23:53.005Z] =================================================================================================================== 00:29:25.509 [2024-11-20T10:23:53.005Z] Total : 22751.14 88.87 0.00 0.00 0.00 0.00 0.00 00:29:25.509 00:29:26.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:26.448 Nvme0n1 : 8.00 22780.62 88.99 0.00 0.00 0.00 0.00 0.00 00:29:26.448 [2024-11-20T10:23:53.944Z] =================================================================================================================== 00:29:26.448 [2024-11-20T10:23:53.944Z] Total : 22780.62 88.99 0.00 0.00 0.00 0.00 0.00 00:29:26.448 00:29:27.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.385 Nvme0n1 : 9.00 22817.67 89.13 0.00 0.00 0.00 0.00 0.00 00:29:27.385 [2024-11-20T10:23:54.881Z] =================================================================================================================== 00:29:27.385 [2024-11-20T10:23:54.881Z] Total : 22817.67 89.13 0.00 0.00 0.00 0.00 0.00 00:29:27.386 
00:29:28.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.324 Nvme0n1 : 10.00 22836.30 89.20 0.00 0.00 0.00 0.00 0.00 00:29:28.324 [2024-11-20T10:23:55.820Z] =================================================================================================================== 00:29:28.324 [2024-11-20T10:23:55.820Z] Total : 22836.30 89.20 0.00 0.00 0.00 0.00 0.00 00:29:28.324 00:29:28.324 00:29:28.324 Latency(us) 00:29:28.324 [2024-11-20T10:23:55.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.324 Nvme0n1 : 10.00 22839.55 89.22 0.00 0.00 5601.18 5157.40 27810.06 00:29:28.324 [2024-11-20T10:23:55.820Z] =================================================================================================================== 00:29:28.324 [2024-11-20T10:23:55.820Z] Total : 22839.55 89.22 0.00 0.00 5601.18 5157.40 27810.06 00:29:28.324 { 00:29:28.324 "results": [ 00:29:28.324 { 00:29:28.324 "job": "Nvme0n1", 00:29:28.324 "core_mask": "0x2", 00:29:28.324 "workload": "randwrite", 00:29:28.324 "status": "finished", 00:29:28.324 "queue_depth": 128, 00:29:28.324 "io_size": 4096, 00:29:28.324 "runtime": 10.003436, 00:29:28.324 "iops": 22839.552329819475, 00:29:28.324 "mibps": 89.21700128835732, 00:29:28.324 "io_failed": 0, 00:29:28.324 "io_timeout": 0, 00:29:28.324 "avg_latency_us": 5601.1787283112035, 00:29:28.324 "min_latency_us": 5157.398260869565, 00:29:28.324 "max_latency_us": 27810.059130434784 00:29:28.324 } 00:29:28.324 ], 00:29:28.324 "core_count": 1 00:29:28.324 } 00:29:28.324 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 58098 00:29:28.324 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 58098 ']' 00:29:28.324 11:23:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 58098 00:29:28.324 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:28.324 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.324 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58098 00:29:28.583 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:28.583 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:28.583 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58098' 00:29:28.584 killing process with pid 58098 00:29:28.584 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 58098 00:29:28.584 Received shutdown signal, test time was about 10.000000 seconds 00:29:28.584 00:29:28.584 Latency(us) 00:29:28.584 [2024-11-20T10:23:56.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.584 [2024-11-20T10:23:56.080Z] =================================================================================================================== 00:29:28.584 [2024-11-20T10:23:56.080Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.584 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 58098 00:29:28.584 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:28.842 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:29.102 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a80347d8-3619-4040-8307-75bf54262e85 00:29:29.102 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:29.367 [2024-11-20 11:23:56.775438] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a80347d8-3619-4040-8307-75bf54262e85 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a80347d8-3619-4040-8307-75bf54262e85 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:29.367 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a80347d8-3619-4040-8307-75bf54262e85 00:29:29.659 request: 00:29:29.659 { 00:29:29.659 "uuid": "a80347d8-3619-4040-8307-75bf54262e85", 00:29:29.659 "method": 
"bdev_lvol_get_lvstores", 00:29:29.659 "req_id": 1 00:29:29.659 } 00:29:29.659 Got JSON-RPC error response 00:29:29.659 response: 00:29:29.659 { 00:29:29.659 "code": -19, 00:29:29.659 "message": "No such device" 00:29:29.659 } 00:29:29.659 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:29.659 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:29.659 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:29.659 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:29.659 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:29.938 aio_bdev 00:29:29.938 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a4150f6d-a7f5-4c01-807b-3468a8e90b50 00:29:29.938 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a4150f6d-a7f5-4c01-807b-3468a8e90b50 00:29:29.938 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:29.938 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:29.938 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:29.938 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:29.938 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:30.197 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4150f6d-a7f5-4c01-807b-3468a8e90b50 -t 2000 00:29:30.197 [ 00:29:30.197 { 00:29:30.197 "name": "a4150f6d-a7f5-4c01-807b-3468a8e90b50", 00:29:30.197 "aliases": [ 00:29:30.197 "lvs/lvol" 00:29:30.197 ], 00:29:30.197 "product_name": "Logical Volume", 00:29:30.197 "block_size": 4096, 00:29:30.197 "num_blocks": 38912, 00:29:30.197 "uuid": "a4150f6d-a7f5-4c01-807b-3468a8e90b50", 00:29:30.197 "assigned_rate_limits": { 00:29:30.197 "rw_ios_per_sec": 0, 00:29:30.197 "rw_mbytes_per_sec": 0, 00:29:30.197 "r_mbytes_per_sec": 0, 00:29:30.197 "w_mbytes_per_sec": 0 00:29:30.197 }, 00:29:30.197 "claimed": false, 00:29:30.197 "zoned": false, 00:29:30.197 "supported_io_types": { 00:29:30.197 "read": true, 00:29:30.197 "write": true, 00:29:30.197 "unmap": true, 00:29:30.197 "flush": false, 00:29:30.197 "reset": true, 00:29:30.197 "nvme_admin": false, 00:29:30.197 "nvme_io": false, 00:29:30.197 "nvme_io_md": false, 00:29:30.197 "write_zeroes": true, 00:29:30.197 "zcopy": false, 00:29:30.197 "get_zone_info": false, 00:29:30.197 "zone_management": false, 00:29:30.197 "zone_append": false, 00:29:30.197 "compare": false, 00:29:30.197 "compare_and_write": false, 00:29:30.197 "abort": false, 00:29:30.197 "seek_hole": true, 00:29:30.197 "seek_data": true, 00:29:30.197 "copy": false, 00:29:30.198 "nvme_iov_md": false 00:29:30.198 }, 00:29:30.198 "driver_specific": { 00:29:30.198 "lvol": { 00:29:30.198 "lvol_store_uuid": "a80347d8-3619-4040-8307-75bf54262e85", 00:29:30.198 "base_bdev": "aio_bdev", 00:29:30.198 
"thin_provision": false, 00:29:30.198 "num_allocated_clusters": 38, 00:29:30.198 "snapshot": false, 00:29:30.198 "clone": false, 00:29:30.198 "esnap_clone": false 00:29:30.198 } 00:29:30.198 } 00:29:30.198 } 00:29:30.198 ] 00:29:30.198 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:30.198 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a80347d8-3619-4040-8307-75bf54262e85 00:29:30.198 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:30.455 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:30.455 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a80347d8-3619-4040-8307-75bf54262e85 00:29:30.456 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:30.714 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:30.714 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4150f6d-a7f5-4c01-807b-3468a8e90b50 00:29:30.714 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a80347d8-3619-4040-8307-75bf54262e85 
00:29:30.973 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:31.232 00:29:31.232 real 0m15.693s 00:29:31.232 user 0m15.172s 00:29:31.232 sys 0m1.512s 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:31.232 ************************************ 00:29:31.232 END TEST lvs_grow_clean 00:29:31.232 ************************************ 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:31.232 ************************************ 00:29:31.232 START TEST lvs_grow_dirty 00:29:31.232 ************************************ 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:31.232 11:23:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:31.232 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:31.491 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:31.491 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:31.749 11:23:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:31.749 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:31.749 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:32.007 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:32.007 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:32.007 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2066d116-ddf7-4998-9d63-ffa59513e95c lvol 150 00:29:32.007 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a6e43512-49a2-4a06-bd98-ba246f41dcfc 00:29:32.007 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:32.007 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:32.265 [2024-11-20 11:23:59.659395] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:32.265 [2024-11-20 
11:23:59.659526] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:32.265 true 00:29:32.265 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:32.265 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:32.524 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:32.524 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:32.783 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a6e43512-49a2-4a06-bd98-ba246f41dcfc 00:29:32.783 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:33.042 [2024-11-20 11:24:00.455788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.042 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:33.301 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60662 00:29:33.301 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.301 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:33.301 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60662 /var/tmp/bdevperf.sock 00:29:33.301 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 60662 ']' 00:29:33.301 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:33.301 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.301 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:33.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:33.301 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.301 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:33.301 [2024-11-20 11:24:00.714762] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:29:33.301 [2024-11-20 11:24:00.714813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60662 ] 00:29:33.301 [2024-11-20 11:24:00.790425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.560 [2024-11-20 11:24:00.834951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.560 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.560 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:33.560 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:33.819 Nvme0n1 00:29:33.819 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:34.077 [ 00:29:34.077 { 00:29:34.077 "name": "Nvme0n1", 00:29:34.077 "aliases": [ 00:29:34.077 "a6e43512-49a2-4a06-bd98-ba246f41dcfc" 00:29:34.077 ], 00:29:34.077 "product_name": "NVMe disk", 00:29:34.078 "block_size": 4096, 00:29:34.078 "num_blocks": 38912, 00:29:34.078 "uuid": "a6e43512-49a2-4a06-bd98-ba246f41dcfc", 00:29:34.078 "numa_id": 1, 00:29:34.078 "assigned_rate_limits": { 00:29:34.078 "rw_ios_per_sec": 0, 00:29:34.078 "rw_mbytes_per_sec": 0, 00:29:34.078 "r_mbytes_per_sec": 0, 00:29:34.078 "w_mbytes_per_sec": 0 00:29:34.078 }, 00:29:34.078 "claimed": false, 00:29:34.078 "zoned": false, 
00:29:34.078 "supported_io_types": { 00:29:34.078 "read": true, 00:29:34.078 "write": true, 00:29:34.078 "unmap": true, 00:29:34.078 "flush": true, 00:29:34.078 "reset": true, 00:29:34.078 "nvme_admin": true, 00:29:34.078 "nvme_io": true, 00:29:34.078 "nvme_io_md": false, 00:29:34.078 "write_zeroes": true, 00:29:34.078 "zcopy": false, 00:29:34.078 "get_zone_info": false, 00:29:34.078 "zone_management": false, 00:29:34.078 "zone_append": false, 00:29:34.078 "compare": true, 00:29:34.078 "compare_and_write": true, 00:29:34.078 "abort": true, 00:29:34.078 "seek_hole": false, 00:29:34.078 "seek_data": false, 00:29:34.078 "copy": true, 00:29:34.078 "nvme_iov_md": false 00:29:34.078 }, 00:29:34.078 "memory_domains": [ 00:29:34.078 { 00:29:34.078 "dma_device_id": "system", 00:29:34.078 "dma_device_type": 1 00:29:34.078 } 00:29:34.078 ], 00:29:34.078 "driver_specific": { 00:29:34.078 "nvme": [ 00:29:34.078 { 00:29:34.078 "trid": { 00:29:34.078 "trtype": "TCP", 00:29:34.078 "adrfam": "IPv4", 00:29:34.078 "traddr": "10.0.0.2", 00:29:34.078 "trsvcid": "4420", 00:29:34.078 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:34.078 }, 00:29:34.078 "ctrlr_data": { 00:29:34.078 "cntlid": 1, 00:29:34.078 "vendor_id": "0x8086", 00:29:34.078 "model_number": "SPDK bdev Controller", 00:29:34.078 "serial_number": "SPDK0", 00:29:34.078 "firmware_revision": "25.01", 00:29:34.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:34.078 "oacs": { 00:29:34.078 "security": 0, 00:29:34.078 "format": 0, 00:29:34.078 "firmware": 0, 00:29:34.078 "ns_manage": 0 00:29:34.078 }, 00:29:34.078 "multi_ctrlr": true, 00:29:34.078 "ana_reporting": false 00:29:34.078 }, 00:29:34.078 "vs": { 00:29:34.078 "nvme_version": "1.3" 00:29:34.078 }, 00:29:34.078 "ns_data": { 00:29:34.078 "id": 1, 00:29:34.078 "can_share": true 00:29:34.078 } 00:29:34.078 } 00:29:34.078 ], 00:29:34.078 "mp_policy": "active_passive" 00:29:34.078 } 00:29:34.078 } 00:29:34.078 ] 00:29:34.078 11:24:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60781 00:29:34.078 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:34.078 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:34.078 Running I/O for 10 seconds... 00:29:35.012 Latency(us) 00:29:35.012 [2024-11-20T10:24:02.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.012 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:35.012 [2024-11-20T10:24:02.508Z] =================================================================================================================== 00:29:35.012 [2024-11-20T10:24:02.508Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:35.012 00:29:35.947 11:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:36.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.206 Nvme0n1 : 2.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:36.206 [2024-11-20T10:24:03.702Z] =================================================================================================================== 00:29:36.206 [2024-11-20T10:24:03.702Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:36.206 00:29:36.206 true 00:29:36.206 11:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:36.206 11:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:36.464 11:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:36.464 11:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:36.464 11:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 60781 00:29:37.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.031 Nvme0n1 : 3.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:37.031 [2024-11-20T10:24:04.527Z] =================================================================================================================== 00:29:37.031 [2024-11-20T10:24:04.527Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:37.031 00:29:38.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.407 Nvme0n1 : 4.00 22828.25 89.17 0.00 0.00 0.00 0.00 0.00 00:29:38.407 [2024-11-20T10:24:05.903Z] =================================================================================================================== 00:29:38.407 [2024-11-20T10:24:05.903Z] Total : 22828.25 89.17 0.00 0.00 0.00 0.00 0.00 00:29:38.407 00:29:39.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.345 Nvme0n1 : 5.00 22910.80 89.50 0.00 0.00 0.00 0.00 0.00 00:29:39.345 [2024-11-20T10:24:06.841Z] =================================================================================================================== 00:29:39.345 [2024-11-20T10:24:06.841Z] Total : 22910.80 89.50 0.00 0.00 0.00 0.00 0.00 00:29:39.345 00:29:40.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:40.281 Nvme0n1 : 6.00 22965.83 89.71 0.00 0.00 0.00 0.00 0.00 00:29:40.281 [2024-11-20T10:24:07.777Z] =================================================================================================================== 00:29:40.281 [2024-11-20T10:24:07.777Z] Total : 22965.83 89.71 0.00 0.00 0.00 0.00 0.00 00:29:40.281 00:29:41.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.216 Nvme0n1 : 7.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:41.216 [2024-11-20T10:24:08.712Z] =================================================================================================================== 00:29:41.216 [2024-11-20T10:24:08.712Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:41.216 00:29:42.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.149 Nvme0n1 : 8.00 23018.75 89.92 0.00 0.00 0.00 0.00 0.00 00:29:42.149 [2024-11-20T10:24:09.645Z] =================================================================================================================== 00:29:42.149 [2024-11-20T10:24:09.646Z] Total : 23018.75 89.92 0.00 0.00 0.00 0.00 0.00 00:29:42.150 00:29:43.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.084 Nvme0n1 : 9.00 22990.78 89.81 0.00 0.00 0.00 0.00 0.00 00:29:43.084 [2024-11-20T10:24:10.580Z] =================================================================================================================== 00:29:43.084 [2024-11-20T10:24:10.580Z] Total : 22990.78 89.81 0.00 0.00 0.00 0.00 0.00 00:29:43.084 00:29:44.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.461 Nvme0n1 : 10.00 22990.40 89.81 0.00 0.00 0.00 0.00 0.00 00:29:44.461 [2024-11-20T10:24:11.957Z] =================================================================================================================== 00:29:44.461 [2024-11-20T10:24:11.957Z] Total : 22990.40 89.81 0.00 0.00 0.00 0.00 0.00 00:29:44.461 00:29:44.461 
00:29:44.461 Latency(us) 00:29:44.461 [2024-11-20T10:24:11.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.461 Nvme0n1 : 10.01 22989.96 89.80 0.00 0.00 5564.63 3362.28 24960.67 00:29:44.461 [2024-11-20T10:24:11.957Z] =================================================================================================================== 00:29:44.461 [2024-11-20T10:24:11.957Z] Total : 22989.96 89.80 0.00 0.00 5564.63 3362.28 24960.67 00:29:44.461 { 00:29:44.461 "results": [ 00:29:44.461 { 00:29:44.461 "job": "Nvme0n1", 00:29:44.461 "core_mask": "0x2", 00:29:44.461 "workload": "randwrite", 00:29:44.461 "status": "finished", 00:29:44.461 "queue_depth": 128, 00:29:44.461 "io_size": 4096, 00:29:44.461 "runtime": 10.005757, 00:29:44.461 "iops": 22989.964677335258, 00:29:44.462 "mibps": 89.80454952084085, 00:29:44.462 "io_failed": 0, 00:29:44.462 "io_timeout": 0, 00:29:44.462 "avg_latency_us": 5564.628609342821, 00:29:44.462 "min_latency_us": 3362.2817391304347, 00:29:44.462 "max_latency_us": 24960.667826086956 00:29:44.462 } 00:29:44.462 ], 00:29:44.462 "core_count": 1 00:29:44.462 } 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60662 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 60662 ']' 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 60662 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.462 11:24:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60662 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60662' 00:29:44.462 killing process with pid 60662 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 60662 00:29:44.462 Received shutdown signal, test time was about 10.000000 seconds 00:29:44.462 00:29:44.462 Latency(us) 00:29:44.462 [2024-11-20T10:24:11.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.462 [2024-11-20T10:24:11.958Z] =================================================================================================================== 00:29:44.462 [2024-11-20T10:24:11.958Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 60662 00:29:44.462 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:44.720 11:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:44.720 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty 
-- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:44.720 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 57597 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 57597 00:29:44.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 57597 Killed "${NVMF_APP[@]}" "$@" 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63036 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63036 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63036 ']' 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.980 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:44.980 [2024-11-20 11:24:12.463242] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:44.980 [2024-11-20 11:24:12.464169] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:29:44.980 [2024-11-20 11:24:12.464205] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.239 [2024-11-20 11:24:12.545135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.239 [2024-11-20 11:24:12.586034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.239 [2024-11-20 11:24:12.586071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.239 [2024-11-20 11:24:12.586078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.239 [2024-11-20 11:24:12.586084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.239 [2024-11-20 11:24:12.586090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.239 [2024-11-20 11:24:12.586681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.239 [2024-11-20 11:24:12.654536] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:45.239 [2024-11-20 11:24:12.654775] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:45.239 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.239 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:45.239 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:45.240 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:45.240 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:45.240 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.240 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:45.498 [2024-11-20 11:24:12.892006] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:45.498 [2024-11-20 11:24:12.892194] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:45.498 [2024-11-20 11:24:12.892279] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:45.498 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:45.498 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a6e43512-49a2-4a06-bd98-ba246f41dcfc 00:29:45.498 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=a6e43512-49a2-4a06-bd98-ba246f41dcfc 00:29:45.498 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:45.498 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:45.498 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:45.498 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:45.498 11:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:45.757 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6e43512-49a2-4a06-bd98-ba246f41dcfc -t 2000 00:29:46.016 [ 00:29:46.016 { 00:29:46.016 "name": "a6e43512-49a2-4a06-bd98-ba246f41dcfc", 00:29:46.016 "aliases": [ 00:29:46.016 "lvs/lvol" 00:29:46.016 ], 00:29:46.016 "product_name": "Logical Volume", 00:29:46.016 "block_size": 4096, 00:29:46.016 "num_blocks": 38912, 00:29:46.016 "uuid": "a6e43512-49a2-4a06-bd98-ba246f41dcfc", 00:29:46.016 "assigned_rate_limits": { 00:29:46.016 "rw_ios_per_sec": 0, 00:29:46.016 "rw_mbytes_per_sec": 0, 00:29:46.016 "r_mbytes_per_sec": 0, 00:29:46.016 "w_mbytes_per_sec": 0 00:29:46.016 }, 00:29:46.016 "claimed": false, 00:29:46.016 "zoned": false, 00:29:46.016 "supported_io_types": { 00:29:46.016 "read": true, 00:29:46.016 "write": true, 00:29:46.016 "unmap": true, 00:29:46.016 "flush": false, 00:29:46.016 "reset": true, 00:29:46.016 "nvme_admin": false, 00:29:46.016 "nvme_io": false, 00:29:46.016 "nvme_io_md": false, 00:29:46.016 "write_zeroes": true, 
00:29:46.016 "zcopy": false, 00:29:46.016 "get_zone_info": false, 00:29:46.016 "zone_management": false, 00:29:46.016 "zone_append": false, 00:29:46.016 "compare": false, 00:29:46.016 "compare_and_write": false, 00:29:46.016 "abort": false, 00:29:46.016 "seek_hole": true, 00:29:46.016 "seek_data": true, 00:29:46.016 "copy": false, 00:29:46.016 "nvme_iov_md": false 00:29:46.016 }, 00:29:46.016 "driver_specific": { 00:29:46.016 "lvol": { 00:29:46.016 "lvol_store_uuid": "2066d116-ddf7-4998-9d63-ffa59513e95c", 00:29:46.016 "base_bdev": "aio_bdev", 00:29:46.016 "thin_provision": false, 00:29:46.016 "num_allocated_clusters": 38, 00:29:46.016 "snapshot": false, 00:29:46.016 "clone": false, 00:29:46.016 "esnap_clone": false 00:29:46.016 } 00:29:46.016 } 00:29:46.016 } 00:29:46.016 ] 00:29:46.016 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:46.016 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:46.016 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:46.273 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:46.273 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:46.273 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:46.273 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:46.273 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:46.530 [2024-11-20 11:24:13.883127] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:46.530 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:46.788 request: 00:29:46.788 { 00:29:46.788 "uuid": "2066d116-ddf7-4998-9d63-ffa59513e95c", 00:29:46.788 "method": "bdev_lvol_get_lvstores", 00:29:46.788 "req_id": 1 00:29:46.788 } 00:29:46.788 Got JSON-RPC error response 00:29:46.788 response: 00:29:46.788 { 00:29:46.788 "code": -19, 00:29:46.788 "message": "No such device" 00:29:46.788 } 00:29:46.788 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:46.788 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:46.788 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:46.788 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:46.788 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:47.046 aio_bdev 00:29:47.046 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a6e43512-49a2-4a06-bd98-ba246f41dcfc 00:29:47.046 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a6e43512-49a2-4a06-bd98-ba246f41dcfc 00:29:47.046 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:47.046 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:47.046 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:47.046 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:47.046 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:47.046 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6e43512-49a2-4a06-bd98-ba246f41dcfc -t 2000 00:29:47.305 [ 00:29:47.305 { 00:29:47.305 "name": "a6e43512-49a2-4a06-bd98-ba246f41dcfc", 00:29:47.305 "aliases": [ 00:29:47.305 "lvs/lvol" 00:29:47.305 ], 00:29:47.305 "product_name": "Logical Volume", 00:29:47.305 "block_size": 4096, 00:29:47.305 "num_blocks": 38912, 00:29:47.305 "uuid": "a6e43512-49a2-4a06-bd98-ba246f41dcfc", 00:29:47.305 "assigned_rate_limits": { 00:29:47.305 "rw_ios_per_sec": 0, 00:29:47.305 "rw_mbytes_per_sec": 0, 00:29:47.305 
"r_mbytes_per_sec": 0, 00:29:47.305 "w_mbytes_per_sec": 0 00:29:47.305 }, 00:29:47.305 "claimed": false, 00:29:47.305 "zoned": false, 00:29:47.305 "supported_io_types": { 00:29:47.305 "read": true, 00:29:47.305 "write": true, 00:29:47.305 "unmap": true, 00:29:47.305 "flush": false, 00:29:47.305 "reset": true, 00:29:47.305 "nvme_admin": false, 00:29:47.305 "nvme_io": false, 00:29:47.305 "nvme_io_md": false, 00:29:47.305 "write_zeroes": true, 00:29:47.305 "zcopy": false, 00:29:47.305 "get_zone_info": false, 00:29:47.305 "zone_management": false, 00:29:47.305 "zone_append": false, 00:29:47.305 "compare": false, 00:29:47.305 "compare_and_write": false, 00:29:47.305 "abort": false, 00:29:47.305 "seek_hole": true, 00:29:47.305 "seek_data": true, 00:29:47.305 "copy": false, 00:29:47.305 "nvme_iov_md": false 00:29:47.305 }, 00:29:47.305 "driver_specific": { 00:29:47.305 "lvol": { 00:29:47.305 "lvol_store_uuid": "2066d116-ddf7-4998-9d63-ffa59513e95c", 00:29:47.305 "base_bdev": "aio_bdev", 00:29:47.305 "thin_provision": false, 00:29:47.305 "num_allocated_clusters": 38, 00:29:47.305 "snapshot": false, 00:29:47.305 "clone": false, 00:29:47.305 "esnap_clone": false 00:29:47.306 } 00:29:47.306 } 00:29:47.306 } 00:29:47.306 ] 00:29:47.306 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:47.306 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:47.306 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:47.564 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:47.564 11:24:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:47.564 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:47.823 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:47.823 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a6e43512-49a2-4a06-bd98-ba246f41dcfc 00:29:47.823 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2066d116-ddf7-4998-9d63-ffa59513e95c 00:29:48.081 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:48.340 00:29:48.340 real 0m17.030s 00:29:48.340 user 0m34.446s 00:29:48.340 sys 0m3.830s 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:48.340 ************************************ 00:29:48.340 END TEST lvs_grow_dirty 00:29:48.340 ************************************ 
00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:48.340 nvmf_trace.0 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.340 11:24:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.340 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.340 rmmod nvme_tcp 00:29:48.340 rmmod nvme_fabrics 00:29:48.603 rmmod nvme_keyring 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63036 ']' 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63036 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63036 ']' 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63036 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63036 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.603 11:24:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63036' 00:29:48.603 killing process with pid 63036 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63036 00:29:48.603 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63036 00:29:48.603 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.603 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.603 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.603 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:48.603 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:48.603 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.603 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.862 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.862 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.862 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.862 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.862 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.773 11:24:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.773 00:29:50.773 real 0m41.938s 00:29:50.773 user 0m52.151s 00:29:50.773 sys 0m10.243s 00:29:50.773 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.773 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:50.773 ************************************ 00:29:50.773 END TEST nvmf_lvs_grow 00:29:50.773 ************************************ 00:29:50.773 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:50.773 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:50.773 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.773 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:50.773 ************************************ 00:29:50.773 START TEST nvmf_bdev_io_wait 00:29:50.773 ************************************ 00:29:50.773 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:51.033 * Looking for test storage... 
00:29:51.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:51.033 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:51.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.034 --rc genhtml_branch_coverage=1 00:29:51.034 --rc genhtml_function_coverage=1 00:29:51.034 --rc genhtml_legend=1 00:29:51.034 --rc geninfo_all_blocks=1 00:29:51.034 --rc geninfo_unexecuted_blocks=1 00:29:51.034 00:29:51.034 ' 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:51.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.034 --rc genhtml_branch_coverage=1 00:29:51.034 --rc genhtml_function_coverage=1 00:29:51.034 --rc genhtml_legend=1 00:29:51.034 --rc geninfo_all_blocks=1 00:29:51.034 --rc geninfo_unexecuted_blocks=1 00:29:51.034 00:29:51.034 ' 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:51.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.034 --rc genhtml_branch_coverage=1 00:29:51.034 --rc genhtml_function_coverage=1 00:29:51.034 --rc genhtml_legend=1 00:29:51.034 --rc geninfo_all_blocks=1 00:29:51.034 --rc geninfo_unexecuted_blocks=1 00:29:51.034 00:29:51.034 ' 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:51.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.034 --rc genhtml_branch_coverage=1 00:29:51.034 --rc genhtml_function_coverage=1 
00:29:51.034 --rc genhtml_legend=1 00:29:51.034 --rc geninfo_all_blocks=1 00:29:51.034 --rc geninfo_unexecuted_blocks=1 00:29:51.034 00:29:51.034 ' 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:51.034 11:24:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.034 11:24:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.034 11:24:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:51.034 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:51.035 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.035 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:51.035 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:51.035 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:51.035 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.035 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.035 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.035 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:51.035 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:51.035 11:24:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:51.035 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:57.604 11:24:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.604 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:57.605 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:57.605 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:57.605 Found net devices under 0000:86:00.0: cvl_0_0 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:57.605 Found net devices under 0000:86:00.1: cvl_0_1 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:57.605 11:24:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:57.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:57.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:29:57.605 00:29:57.605 --- 10.0.0.2 ping statistics --- 00:29:57.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.605 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:29:57.605 00:29:57.605 --- 10.0.0.1 ping statistics --- 00:29:57.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.605 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.605 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:57.606 11:24:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67086 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67086 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67086 ']' 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.606 11:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:57.606 [2024-11-20 11:24:24.444674] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:57.606 [2024-11-20 11:24:24.445651] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:29:57.606 [2024-11-20 11:24:24.445691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.606 [2024-11-20 11:24:24.526940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.606 [2024-11-20 11:24:24.568542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.606 [2024-11-20 11:24:24.568582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.606 [2024-11-20 11:24:24.568588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.606 [2024-11-20 11:24:24.568595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.606 [2024-11-20 11:24:24.568601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:57.606 [2024-11-20 11:24:24.570201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.606 [2024-11-20 11:24:24.570309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.606 [2024-11-20 11:24:24.570399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.606 [2024-11-20 11:24:24.570401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.606 [2024-11-20 11:24:24.570784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.866 11:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.866 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:58.125 [2024-11-20 11:24:25.387942] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:58.125 [2024-11-20 11:24:25.388770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:58.125 [2024-11-20 11:24:25.388933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:58.125 [2024-11-20 11:24:25.389069] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
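The common.sh trace earlier in this run classifies NICs by PCI vendor:device ID into per-driver bash arrays (e810, x722, mlx) before selecting the test interfaces. A minimal, self-contained sketch of that pattern follows; the `pci_bus_cache` contents here are hypothetical stand-ins for what the real script gathers from /sys/bus/pci:

```shell
# Sketch of the nvmf/common.sh device-classification pattern seen in the
# trace: a "vendor:device" -> BDF-list cache feeds per-NIC-family arrays.
declare -A pci_bus_cache
# Hypothetical cache entries; the real script populates these by scanning
# /sys/bus/pci/devices. BDFs below match the ones printed in this log.
pci_bus_cache["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"   # Intel E810
pci_bus_cache["0x15b3:0x1017"]=""                            # Mellanox CX-5, none present

e810=() x722=() mlx=()
e810+=(${pci_bus_cache["0x8086:0x159b"]})   # unquoted on purpose: split into BDFs
mlx+=(${pci_bus_cache["0x15b3:0x1017"]})    # empty string appends nothing

pci_devs=("${e810[@]}")                     # tcp transport on this rig -> e810
(( ${#pci_devs[@]} == 0 )) && { echo "no test NICs found"; exit 1; }
for pci in "${pci_devs[@]}"; do
  echo "Found $pci (0x8086 - 0x159b)"
done
```

This mirrors the `(( 2 == 0 ))` guard and the two "Found 0000:86:00.x" lines in the trace above.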
00:29:58.125 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.125 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.125 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.125 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:58.125 [2024-11-20 11:24:25.399178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.125 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.125 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:58.125 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.125 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:58.125 Malloc0 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.126 11:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:58.126 [2024-11-20 11:24:25.467226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67334 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67336 00:29:58.126 11:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:58.126 { 00:29:58.126 "params": { 00:29:58.126 "name": "Nvme$subsystem", 00:29:58.126 "trtype": "$TEST_TRANSPORT", 00:29:58.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.126 "adrfam": "ipv4", 00:29:58.126 "trsvcid": "$NVMF_PORT", 00:29:58.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.126 "hdgst": ${hdgst:-false}, 00:29:58.126 "ddgst": ${ddgst:-false} 00:29:58.126 }, 00:29:58.126 "method": "bdev_nvme_attach_controller" 00:29:58.126 } 00:29:58.126 EOF 00:29:58.126 )") 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67338 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:58.126 11:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:58.126 { 00:29:58.126 "params": { 00:29:58.126 "name": "Nvme$subsystem", 00:29:58.126 "trtype": "$TEST_TRANSPORT", 00:29:58.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.126 "adrfam": "ipv4", 00:29:58.126 "trsvcid": "$NVMF_PORT", 00:29:58.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.126 "hdgst": ${hdgst:-false}, 00:29:58.126 "ddgst": ${ddgst:-false} 00:29:58.126 }, 00:29:58.126 "method": "bdev_nvme_attach_controller" 00:29:58.126 } 00:29:58.126 EOF 00:29:58.126 )") 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67341 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:58.126 { 00:29:58.126 "params": { 00:29:58.126 "name": 
"Nvme$subsystem", 00:29:58.126 "trtype": "$TEST_TRANSPORT", 00:29:58.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.126 "adrfam": "ipv4", 00:29:58.126 "trsvcid": "$NVMF_PORT", 00:29:58.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.126 "hdgst": ${hdgst:-false}, 00:29:58.126 "ddgst": ${ddgst:-false} 00:29:58.126 }, 00:29:58.126 "method": "bdev_nvme_attach_controller" 00:29:58.126 } 00:29:58.126 EOF 00:29:58.126 )") 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:58.126 { 00:29:58.126 "params": { 00:29:58.126 "name": "Nvme$subsystem", 00:29:58.126 "trtype": "$TEST_TRANSPORT", 00:29:58.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.126 "adrfam": "ipv4", 00:29:58.126 "trsvcid": "$NVMF_PORT", 00:29:58.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.126 "hdgst": ${hdgst:-false}, 00:29:58.126 "ddgst": ${ddgst:-false} 00:29:58.126 }, 00:29:58.126 "method": 
"bdev_nvme_attach_controller" 00:29:58.126 } 00:29:58.126 EOF 00:29:58.126 )") 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67334 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:58.126 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:58.126 "params": { 00:29:58.126 "name": "Nvme1", 00:29:58.126 "trtype": "tcp", 00:29:58.126 "traddr": "10.0.0.2", 00:29:58.126 "adrfam": "ipv4", 00:29:58.126 "trsvcid": "4420", 00:29:58.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.126 "hdgst": false, 00:29:58.126 "ddgst": false 00:29:58.126 }, 00:29:58.126 "method": "bdev_nvme_attach_controller" 00:29:58.126 }' 00:29:58.127 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:29:58.127 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:58.127 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:58.127 "params": { 00:29:58.127 "name": "Nvme1", 00:29:58.127 "trtype": "tcp", 00:29:58.127 "traddr": "10.0.0.2", 00:29:58.127 "adrfam": "ipv4", 00:29:58.127 "trsvcid": "4420", 00:29:58.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.127 "hdgst": false, 00:29:58.127 "ddgst": false 00:29:58.127 }, 00:29:58.127 "method": "bdev_nvme_attach_controller" 00:29:58.127 }' 00:29:58.127 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:58.127 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:58.127 "params": { 00:29:58.127 "name": "Nvme1", 00:29:58.127 "trtype": "tcp", 00:29:58.127 "traddr": "10.0.0.2", 00:29:58.127 "adrfam": "ipv4", 00:29:58.127 "trsvcid": "4420", 00:29:58.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.127 "hdgst": false, 00:29:58.127 "ddgst": false 00:29:58.127 }, 00:29:58.127 "method": "bdev_nvme_attach_controller" 00:29:58.127 }' 00:29:58.127 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:58.127 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:58.127 "params": { 00:29:58.127 "name": "Nvme1", 00:29:58.127 "trtype": "tcp", 00:29:58.127 "traddr": "10.0.0.2", 00:29:58.127 "adrfam": "ipv4", 00:29:58.127 "trsvcid": "4420", 00:29:58.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.127 "hdgst": false, 00:29:58.127 "ddgst": false 00:29:58.127 }, 00:29:58.127 "method": "bdev_nvme_attach_controller" 
00:29:58.127 }' 00:29:58.127 [2024-11-20 11:24:25.519838] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:29:58.127 [2024-11-20 11:24:25.519887] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:58.127 [2024-11-20 11:24:25.520373] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:29:58.127 [2024-11-20 11:24:25.520416] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:58.127 [2024-11-20 11:24:25.522424] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:29:58.127 [2024-11-20 11:24:25.522467] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:58.127 [2024-11-20 11:24:25.523286] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:29:58.127 [2024-11-20 11:24:25.523328] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:58.385 [2024-11-20 11:24:25.713980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.385 [2024-11-20 11:24:25.757016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:58.385 [2024-11-20 11:24:25.806538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.385 [2024-11-20 11:24:25.863099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:58.644 [2024-11-20 11:24:25.888955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.644 [2024-11-20 11:24:25.931767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:58.644 [2024-11-20 11:24:25.936017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.644 [2024-11-20 11:24:25.978975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:58.644 Running I/O for 1 seconds... 00:29:58.644 Running I/O for 1 seconds... 00:29:58.644 Running I/O for 1 seconds... 00:29:58.902 Running I/O for 1 seconds... 
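The `gen_nvmf_target_json` expansion traced above builds one JSON `bdev_nvme_attach_controller` fragment per subsystem with a heredoc, collects them in an array, and comma-joins them for bdevperf's `--json /dev/fd/63` input. A reduced sketch of that pattern (one subsystem, values taken from this run; the jq validation step is omitted):

```shell
# Sketch of the gen_nvmf_target_json pattern: heredoc-built JSON fragments
# accumulated in an array, then joined with IFS=, as in the trace.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join fragments with commas, as common.sh does before piping to jq.
( IFS=,; printf '%s\n' "${config[*]}" )
```

With more subsystems in the loop, the comma join yields the multi-controller attach list each bdevperf instance consumes.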
00:29:59.841 13759.00 IOPS, 53.75 MiB/s 00:29:59.841 Latency(us) 00:29:59.841 [2024-11-20T10:24:27.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.841 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:59.841 Nvme1n1 : 1.01 13824.30 54.00 0.00 0.00 9232.62 1866.35 10599.74 00:29:59.841 [2024-11-20T10:24:27.337Z] =================================================================================================================== 00:29:59.841 [2024-11-20T10:24:27.337Z] Total : 13824.30 54.00 0.00 0.00 9232.62 1866.35 10599.74 00:29:59.841 7188.00 IOPS, 28.08 MiB/s 00:29:59.841 Latency(us) 00:29:59.841 [2024-11-20T10:24:27.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.841 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:59.841 Nvme1n1 : 1.01 7217.55 28.19 0.00 0.00 17624.39 4274.09 31229.33 00:29:59.841 [2024-11-20T10:24:27.337Z] =================================================================================================================== 00:29:59.841 [2024-11-20T10:24:27.337Z] Total : 7217.55 28.19 0.00 0.00 17624.39 4274.09 31229.33 00:29:59.841 238232.00 IOPS, 930.59 MiB/s 00:29:59.841 Latency(us) 00:29:59.841 [2024-11-20T10:24:27.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.841 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:59.841 Nvme1n1 : 1.00 237861.32 929.15 0.00 0.00 535.10 235.07 1545.79 00:29:59.841 [2024-11-20T10:24:27.337Z] =================================================================================================================== 00:29:59.841 [2024-11-20T10:24:27.337Z] Total : 237861.32 929.15 0.00 0.00 535.10 235.07 1545.79 00:29:59.841 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67336 00:29:59.841 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 67338 00:29:59.841 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67341 00:29:59.841 8495.00 IOPS, 33.18 MiB/s 00:29:59.841 Latency(us) 00:29:59.841 [2024-11-20T10:24:27.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.841 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:59.841 Nvme1n1 : 1.00 8603.24 33.61 0.00 0.00 14848.27 2607.19 36928.11 00:29:59.841 [2024-11-20T10:24:27.337Z] =================================================================================================================== 00:29:59.841 [2024-11-20T10:24:27.337Z] Total : 8603.24 33.61 0.00 0.00 14848.27 2607.19 36928.11 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:00.105 11:24:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:00.105 rmmod nvme_tcp 00:30:00.105 rmmod nvme_fabrics 00:30:00.105 rmmod nvme_keyring 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67086 ']' 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67086 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67086 ']' 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67086 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67086 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67086' 00:30:00.105 killing process with pid 67086 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67086 00:30:00.105 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67086 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.364 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.364 11:24:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.271 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:02.271 00:30:02.271 real 0m11.488s 00:30:02.271 user 0m15.127s 00:30:02.271 sys 0m6.535s 00:30:02.271 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:02.271 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:02.271 ************************************ 00:30:02.271 END TEST nvmf_bdev_io_wait 00:30:02.271 ************************************ 00:30:02.271 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:02.271 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:02.271 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:02.271 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:02.531 ************************************ 00:30:02.531 START TEST nvmf_queue_depth 00:30:02.531 ************************************ 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:02.531 * Looking for test storage... 
00:30:02.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.531 --rc genhtml_branch_coverage=1 00:30:02.531 --rc genhtml_function_coverage=1 00:30:02.531 --rc genhtml_legend=1 00:30:02.531 --rc geninfo_all_blocks=1 00:30:02.531 --rc geninfo_unexecuted_blocks=1 00:30:02.531 00:30:02.531 ' 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.531 --rc genhtml_branch_coverage=1 00:30:02.531 --rc genhtml_function_coverage=1 00:30:02.531 --rc genhtml_legend=1 00:30:02.531 --rc geninfo_all_blocks=1 00:30:02.531 --rc geninfo_unexecuted_blocks=1 00:30:02.531 00:30:02.531 ' 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.531 --rc genhtml_branch_coverage=1 00:30:02.531 --rc genhtml_function_coverage=1 00:30:02.531 --rc genhtml_legend=1 00:30:02.531 --rc geninfo_all_blocks=1 00:30:02.531 --rc geninfo_unexecuted_blocks=1 00:30:02.531 00:30:02.531 ' 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.531 --rc genhtml_branch_coverage=1 00:30:02.531 --rc genhtml_function_coverage=1 00:30:02.531 --rc genhtml_legend=1 00:30:02.531 --rc 
geninfo_all_blocks=1 00:30:02.531 --rc geninfo_unexecuted_blocks=1 00:30:02.531 00:30:02.531 ' 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.531 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.532 11:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:02.532 11:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.532 11:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.532 11:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:02.532 11:24:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:02.532 11:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:02.532 11:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.103 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.103 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.103 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.103 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.103 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.104 
11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:09.104 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.104 11:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:09.104 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:09.104 Found net devices under 0000:86:00.0: cvl_0_0 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:09.104 Found net devices under 0000:86:00.1: cvl_0_1 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:09.104 11:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:09.104 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:09.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:09.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:30:09.105 00:30:09.105 --- 10.0.0.2 ping statistics --- 00:30:09.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.105 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:09.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:30:09.105 00:30:09.105 --- 10.0.0.1 ping statistics --- 00:30:09.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.105 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:09.105 11:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=71123 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 71123 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 71123 ']' 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.105 11:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.105 [2024-11-20 11:24:35.992373] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:09.105 [2024-11-20 11:24:35.993321] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:30:09.105 [2024-11-20 11:24:35.993355] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.105 [2024-11-20 11:24:36.070493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.105 [2024-11-20 11:24:36.121880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.105 [2024-11-20 11:24:36.121923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.105 [2024-11-20 11:24:36.121934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.105 [2024-11-20 11:24:36.121942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.105 [2024-11-20 11:24:36.121967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:09.105 [2024-11-20 11:24:36.122674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.105 [2024-11-20 11:24:36.201386] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:09.105 [2024-11-20 11:24:36.201659] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:09.364 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.364 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:09.364 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:09.364 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:09.364 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.624 [2024-11-20 11:24:36.867169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.624 Malloc0 00:30:09.624 11:24:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.624 [2024-11-20 11:24:36.931317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.624 
11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=71253 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 71253 /var/tmp/bdevperf.sock 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 71253 ']' 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:09.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.624 11:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.624 [2024-11-20 11:24:36.980643] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:30:09.624 [2024-11-20 11:24:36.980687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71253 ] 00:30:09.624 [2024-11-20 11:24:37.054301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.624 [2024-11-20 11:24:37.099661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.884 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.884 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:09.884 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:09.884 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.884 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:09.884 NVMe0n1 00:30:09.884 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.884 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:10.143 Running I/O for 10 seconds... 
00:30:12.081 11258.00 IOPS, 43.98 MiB/s [2024-11-20T10:24:40.513Z] 11776.00 IOPS, 46.00 MiB/s [2024-11-20T10:24:41.449Z] 11932.67 IOPS, 46.61 MiB/s [2024-11-20T10:24:42.827Z] 11972.50 IOPS, 46.77 MiB/s [2024-11-20T10:24:43.763Z] 12065.00 IOPS, 47.13 MiB/s [2024-11-20T10:24:44.700Z] 12102.67 IOPS, 47.28 MiB/s [2024-11-20T10:24:45.637Z] 12139.71 IOPS, 47.42 MiB/s [2024-11-20T10:24:46.574Z] 12162.88 IOPS, 47.51 MiB/s [2024-11-20T10:24:47.692Z] 12186.44 IOPS, 47.60 MiB/s [2024-11-20T10:24:47.692Z] 12199.60 IOPS, 47.65 MiB/s 00:30:20.196 Latency(us) 00:30:20.196 [2024-11-20T10:24:47.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.196 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:20.196 Verification LBA range: start 0x0 length 0x4000 00:30:20.196 NVMe0n1 : 10.05 12237.45 47.80 0.00 0.00 83400.43 12936.24 55848.07 00:30:20.196 [2024-11-20T10:24:47.692Z] =================================================================================================================== 00:30:20.196 [2024-11-20T10:24:47.692Z] Total : 12237.45 47.80 0.00 0.00 83400.43 12936.24 55848.07 00:30:20.196 { 00:30:20.196 "results": [ 00:30:20.196 { 00:30:20.196 "job": "NVMe0n1", 00:30:20.196 "core_mask": "0x1", 00:30:20.196 "workload": "verify", 00:30:20.196 "status": "finished", 00:30:20.196 "verify_range": { 00:30:20.196 "start": 0, 00:30:20.196 "length": 16384 00:30:20.196 }, 00:30:20.196 "queue_depth": 1024, 00:30:20.196 "io_size": 4096, 00:30:20.196 "runtime": 10.05087, 00:30:20.196 "iops": 12237.44810150763, 00:30:20.196 "mibps": 47.80253164651418, 00:30:20.196 "io_failed": 0, 00:30:20.196 "io_timeout": 0, 00:30:20.196 "avg_latency_us": 83400.43411711349, 00:30:20.196 "min_latency_us": 12936.23652173913, 00:30:20.196 "max_latency_us": 55848.06956521739 00:30:20.196 } 00:30:20.196 ], 00:30:20.196 "core_count": 1 00:30:20.196 } 00:30:20.196 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 71253 00:30:20.196 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 71253 ']' 00:30:20.196 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 71253 00:30:20.196 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:20.196 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.196 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71253 00:30:20.196 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:20.196 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:20.197 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71253' 00:30:20.197 killing process with pid 71253 00:30:20.197 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 71253 00:30:20.197 Received shutdown signal, test time was about 10.000000 seconds 00:30:20.197 00:30:20.197 Latency(us) 00:30:20.197 [2024-11-20T10:24:47.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.197 [2024-11-20T10:24:47.693Z] =================================================================================================================== 00:30:20.197 [2024-11-20T10:24:47.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:20.197 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 71253 00:30:20.455 11:24:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.455 rmmod nvme_tcp 00:30:20.455 rmmod nvme_fabrics 00:30:20.455 rmmod nvme_keyring 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 71123 ']' 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 71123 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 71123 ']' 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 71123 00:30:20.455 11:24:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71123 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71123' 00:30:20.455 killing process with pid 71123 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 71123 00:30:20.455 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 71123 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.714 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.619 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:22.619 00:30:22.619 real 0m20.273s 00:30:22.619 user 0m22.679s 00:30:22.619 sys 0m6.431s 00:30:22.619 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:22.619 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:22.619 ************************************ 00:30:22.619 END TEST nvmf_queue_depth 00:30:22.619 ************************************ 00:30:22.619 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:22.619 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:22.619 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:22.619 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:22.877 ************************************ 00:30:22.877 START 
TEST nvmf_target_multipath 00:30:22.877 ************************************ 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:22.877 * Looking for test storage... 00:30:22.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:22.877 11:24:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:22.877 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:22.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.878 --rc genhtml_branch_coverage=1 00:30:22.878 --rc genhtml_function_coverage=1 00:30:22.878 --rc genhtml_legend=1 00:30:22.878 --rc geninfo_all_blocks=1 00:30:22.878 --rc geninfo_unexecuted_blocks=1 00:30:22.878 00:30:22.878 ' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:22.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.878 --rc genhtml_branch_coverage=1 00:30:22.878 --rc genhtml_function_coverage=1 00:30:22.878 --rc genhtml_legend=1 00:30:22.878 --rc geninfo_all_blocks=1 00:30:22.878 --rc geninfo_unexecuted_blocks=1 00:30:22.878 00:30:22.878 ' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:22.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.878 --rc genhtml_branch_coverage=1 00:30:22.878 --rc genhtml_function_coverage=1 00:30:22.878 --rc genhtml_legend=1 00:30:22.878 --rc geninfo_all_blocks=1 00:30:22.878 --rc geninfo_unexecuted_blocks=1 00:30:22.878 00:30:22.878 ' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:22.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.878 --rc genhtml_branch_coverage=1 00:30:22.878 --rc genhtml_function_coverage=1 00:30:22.878 --rc genhtml_legend=1 00:30:22.878 --rc geninfo_all_blocks=1 00:30:22.878 --rc geninfo_unexecuted_blocks=1 00:30:22.878 00:30:22.878 ' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:22.878 11:24:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.878 11:24:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:22.878 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:29.447 11:24:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:29.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:29.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.447 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:29.447 Found net devices under 0000:86:00.0: cvl_0_0 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.448 11:24:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:29.448 Found net devices under 0000:86:00.1: cvl_0_1 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.448 11:24:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.448 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.448 11:24:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:29.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:29.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:30:29.448 00:30:29.448 --- 10.0.0.2 ping statistics --- 00:30:29.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.448 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:29.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:30:29.448 00:30:29.448 --- 10.0.0.1 ping statistics --- 00:30:29.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.448 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:29.448 only one NIC for nvmf test 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:29.448 11:24:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.448 rmmod nvme_tcp 00:30:29.448 rmmod nvme_fabrics 00:30:29.448 rmmod nvme_keyring 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:29.448 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:29.449 11:24:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.449 11:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.357 
11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:31.357 00:30:31.357 real 0m8.295s 00:30:31.357 user 0m1.871s 00:30:31.357 sys 0m4.438s 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:31.357 ************************************ 00:30:31.357 END TEST nvmf_target_multipath 00:30:31.357 ************************************ 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:31.357 ************************************ 00:30:31.357 START TEST nvmf_zcopy 00:30:31.357 ************************************ 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:31.357 * Looking for test storage... 
00:30:31.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:31.357 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:31.358 11:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:31.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.358 --rc genhtml_branch_coverage=1 00:30:31.358 --rc genhtml_function_coverage=1 00:30:31.358 --rc genhtml_legend=1 00:30:31.358 --rc geninfo_all_blocks=1 00:30:31.358 --rc geninfo_unexecuted_blocks=1 00:30:31.358 00:30:31.358 ' 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:31.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.358 --rc genhtml_branch_coverage=1 00:30:31.358 --rc genhtml_function_coverage=1 00:30:31.358 --rc genhtml_legend=1 00:30:31.358 --rc geninfo_all_blocks=1 00:30:31.358 --rc geninfo_unexecuted_blocks=1 00:30:31.358 00:30:31.358 ' 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:31.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.358 --rc genhtml_branch_coverage=1 00:30:31.358 --rc genhtml_function_coverage=1 00:30:31.358 --rc genhtml_legend=1 00:30:31.358 --rc geninfo_all_blocks=1 00:30:31.358 --rc geninfo_unexecuted_blocks=1 00:30:31.358 00:30:31.358 ' 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:31.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.358 --rc genhtml_branch_coverage=1 00:30:31.358 --rc genhtml_function_coverage=1 00:30:31.358 --rc genhtml_legend=1 00:30:31.358 --rc geninfo_all_blocks=1 00:30:31.358 --rc geninfo_unexecuted_blocks=1 00:30:31.358 00:30:31.358 ' 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.358 11:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.358 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.359 11:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:31.359 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.932 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.932 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:37.932 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:37.932 
11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:37.932 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.933 11:25:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:37.933 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:37.933 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:37.933 Found net devices under 0000:86:00.0: cvl_0_0 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:37.933 Found net devices under 0000:86:00.1: cvl_0_1 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.933 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.934 11:25:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:37.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:37.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:30:37.934 00:30:37.934 --- 10.0.0.2 ping statistics --- 00:30:37.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.934 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:37.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:30:37.934 00:30:37.934 --- 10.0.0.1 ping statistics --- 00:30:37.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.934 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=79895 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 79895 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 79895 ']' 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.934 [2024-11-20 11:25:04.630883] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:37.934 [2024-11-20 11:25:04.631872] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:30:37.934 [2024-11-20 11:25:04.631909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.934 [2024-11-20 11:25:04.710238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.934 [2024-11-20 11:25:04.751294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.934 [2024-11-20 11:25:04.751331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.934 [2024-11-20 11:25:04.751341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.934 [2024-11-20 11:25:04.751348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.934 [2024-11-20 11:25:04.751354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:37.934 [2024-11-20 11:25:04.751977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.934 [2024-11-20 11:25:04.819277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:37.934 [2024-11-20 11:25:04.819509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
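The interface plumbing and firewall rule at the top of this excerpt (nvmf/common.sh@265–290) follow a fixed pattern: one port of the NIC pair is moved into a private network namespace for the target while the other stays in the root namespace for the initiator, then connectivity is verified in both directions. A dry-run sketch of those steps follows; the `run()` echo wrapper is an illustration-only assumption (the real helpers in nvmf/common.sh execute `ip`/`iptables` directly and need root), while `ipts()` mirrors the expansion visible at common.sh@790 above.

```shell
# Dry-run sketch of the namespace setup logged above.  run() echoes instead
# of executing, so no root is needed (illustration-only wrapper).  ipts()
# mirrors the common.sh helper: it tags each iptables rule with an
# SPDK_NVMF comment so teardown can later delete exactly these rules.
run() { echo "$@"; }
ipts() { run iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                       # private ns for the target
run ip link set cvl_0_0 netns "$NS"                          # move one port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

The comment tag is why the log shows the rule appended with `-m comment --comment 'SPDK_NVMF:-I INPUT 1 ...'`: cleanup can match on the `SPDK_NVMF:` prefix without disturbing unrelated firewall rules.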
00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.934 [2024-11-20 11:25:04.884658] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.934 
11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.934 [2024-11-20 11:25:04.912877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.934 malloc0 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:37.934 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:37.935 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:37.935 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:37.935 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:37.935 { 00:30:37.935 "params": { 00:30:37.935 "name": "Nvme$subsystem", 00:30:37.935 "trtype": "$TEST_TRANSPORT", 00:30:37.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.935 "adrfam": "ipv4", 00:30:37.935 "trsvcid": "$NVMF_PORT", 00:30:37.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.935 "hdgst": ${hdgst:-false}, 00:30:37.935 "ddgst": ${ddgst:-false} 00:30:37.935 }, 00:30:37.935 "method": "bdev_nvme_attach_controller" 00:30:37.935 } 00:30:37.935 EOF 00:30:37.935 )") 00:30:37.935 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:37.935 11:25:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:37.935 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:37.935 11:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:37.935 "params": { 00:30:37.935 "name": "Nvme1", 00:30:37.935 "trtype": "tcp", 00:30:37.935 "traddr": "10.0.0.2", 00:30:37.935 "adrfam": "ipv4", 00:30:37.935 "trsvcid": "4420", 00:30:37.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:37.935 "hdgst": false, 00:30:37.935 "ddgst": false 00:30:37.935 }, 00:30:37.935 "method": "bdev_nvme_attach_controller" 00:30:37.935 }' 00:30:37.935 [2024-11-20 11:25:05.009026] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:30:37.935 [2024-11-20 11:25:05.009085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80048 ] 00:30:37.935 [2024-11-20 11:25:05.084563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.935 [2024-11-20 11:25:05.126001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.935 Running I/O for 10 seconds... 
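The JSON that bdevperf reads from `/dev/fd/62` is produced by the `gen_nvmf_target_json` machinery traced just above (`config=()`, the `EOF` heredoc template, `IFS=,`, `printf`). Below is a condensed single-subsystem sketch of that template with the values this run resolved to; the function name matches the helper in nvmf/common.sh, but the simplified body is an approximation of it.

```shell
# Condensed sketch of gen_nvmf_target_json from nvmf/common.sh: expand a
# heredoc per subsystem, defaulting hdgst/ddgst to false, and emit the
# bdev_nvme_attach_controller config that bdevperf consumes via --json.
gen_nvmf_target_json() {
    subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Values as resolved in this run (see the printf output above):
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420 \
    gen_nvmf_target_json 1
```

Because `hdgst`/`ddgst` are unset here, the `${var:-false}` defaults yield the `"hdgst": false, "ddgst": false` seen in the expanded config above.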
00:30:39.809 8328.00 IOPS, 65.06 MiB/s [2024-11-20T10:25:08.683Z] 8360.00 IOPS, 65.31 MiB/s [2024-11-20T10:25:09.620Z] 8378.33 IOPS, 65.46 MiB/s [2024-11-20T10:25:10.556Z] 8390.25 IOPS, 65.55 MiB/s [2024-11-20T10:25:11.491Z] 8382.80 IOPS, 65.49 MiB/s [2024-11-20T10:25:12.425Z] 8367.50 IOPS, 65.37 MiB/s [2024-11-20T10:25:13.360Z] 8375.29 IOPS, 65.43 MiB/s [2024-11-20T10:25:14.738Z] 8381.00 IOPS, 65.48 MiB/s [2024-11-20T10:25:15.676Z] 8385.33 IOPS, 65.51 MiB/s [2024-11-20T10:25:15.676Z] 8388.60 IOPS, 65.54 MiB/s 00:30:48.180 Latency(us) 00:30:48.180 [2024-11-20T10:25:15.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.180 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:48.180 Verification LBA range: start 0x0 length 0x1000 00:30:48.180 Nvme1n1 : 10.01 8390.98 65.55 0.00 0.00 15211.68 2550.21 21655.37 00:30:48.180 [2024-11-20T10:25:15.676Z] =================================================================================================================== 00:30:48.180 [2024-11-20T10:25:15.676Z] Total : 8390.98 65.55 0.00 0.00 15211.68 2550.21 21655.37 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=81659 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:48.180 11:25:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:48.180 { 00:30:48.180 "params": { 00:30:48.180 "name": "Nvme$subsystem", 00:30:48.180 "trtype": "$TEST_TRANSPORT", 00:30:48.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:48.180 "adrfam": "ipv4", 00:30:48.180 "trsvcid": "$NVMF_PORT", 00:30:48.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:48.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:48.180 "hdgst": ${hdgst:-false}, 00:30:48.180 "ddgst": ${ddgst:-false} 00:30:48.180 }, 00:30:48.180 "method": "bdev_nvme_attach_controller" 00:30:48.180 } 00:30:48.180 EOF 00:30:48.180 )") 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:48.180 [2024-11-20 11:25:15.480336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.480368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:48.180 11:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:48.180 "params": { 00:30:48.180 "name": "Nvme1", 00:30:48.180 "trtype": "tcp", 00:30:48.180 "traddr": "10.0.0.2", 00:30:48.180 "adrfam": "ipv4", 00:30:48.180 "trsvcid": "4420", 00:30:48.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:48.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:48.180 "hdgst": false, 00:30:48.180 "ddgst": false 00:30:48.180 }, 00:30:48.180 "method": "bdev_nvme_attach_controller" 00:30:48.180 }' 00:30:48.180 [2024-11-20 11:25:15.492313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.492334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.504299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.504312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.505686] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:30:48.180 [2024-11-20 11:25:15.505728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81659 ] 00:30:48.180 [2024-11-20 11:25:15.516305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.516322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.528298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.528311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.540297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.540308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.552297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.552309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.562914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.180 [2024-11-20 11:25:15.564296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.564308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.576299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.576315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.588295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:48.180 [2024-11-20 11:25:15.588307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.600294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.600311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.604911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.180 [2024-11-20 11:25:15.612301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.612315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.624308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.624328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.636305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.636323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.648303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.180 [2024-11-20 11:25:15.648317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.180 [2024-11-20 11:25:15.660301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.181 [2024-11-20 11:25:15.660314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.181 [2024-11-20 11:25:15.672302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.181 [2024-11-20 11:25:15.672315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.684300] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.684310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.696309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.696329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.708302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.708317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.720299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.720314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.732300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.732316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.783187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.783205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.792311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.792326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 Running I/O for 5 seconds... 
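The 10-second verify summary earlier in the log (8390.98 IOPS, 65.55 MiB/s) is internally consistent: at the 8192-byte I/O size passed via `-o 8192`, throughput in MiB/s is simply IOPS / 128, since one MiB holds 128 such I/Os. A quick cross-check:

```shell
# Sanity-check the bdevperf summary line: 8 KiB I/Os mean MiB/s = IOPS / 128.
iops=8390.98
awk -v iops="$iops" 'BEGIN { printf "%.2f MiB/s\n", iops * 8192 / (1024 * 1024) }'
# -> 65.55 MiB/s, matching the "Total" row of the table
```

The same ratio holds for the per-second progress samples, e.g. 8328.00 IOPS / 128 = 65.06 MiB/s in the first sample.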
00:30:48.440 [2024-11-20 11:25:15.806376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.806395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.821688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.821710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.837104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.837122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.852537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.852556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.864179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.864203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.878618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.878637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.893674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.893694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.908542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.908562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.440 [2024-11-20 11:25:15.920254] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.440 [2024-11-20 11:25:15.920273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:15.934788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:15.934807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:15.949831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:15.949850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:15.965060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:15.965079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:15.976739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:15.976758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:15.989465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:15.989486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:15.999961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:15.999981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.014598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.014619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.029463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.029483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.044372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.044393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.055723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.055743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.070336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.070356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.085602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.085622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.100643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.100662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.115515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.115536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.130438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.130465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.145556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 
[2024-11-20 11:25:16.145576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.160266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.160287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.174020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.174040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.700 [2024-11-20 11:25:16.189052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.700 [2024-11-20 11:25:16.189072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.960 [2024-11-20 11:25:16.203932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.960 [2024-11-20 11:25:16.203961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.960 [2024-11-20 11:25:16.217476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.960 [2024-11-20 11:25:16.217496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.960 [2024-11-20 11:25:16.232699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.960 [2024-11-20 11:25:16.232719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.960 [2024-11-20 11:25:16.248571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.960 [2024-11-20 11:25:16.248591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.960 [2024-11-20 11:25:16.262529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.960 [2024-11-20 11:25:16.262550] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:48.960 [2024-11-20 11:25:16.277731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:48.960 [2024-11-20 11:25:16.277751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2123 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" pair repeated at ~12-16 ms intervals from 11:25:16.292744 through 11:25:18.469199 ...]
00:30:49.479 16367.00 IOPS, 127.87 MiB/s [2024-11-20T10:25:16.975Z]
00:30:50.520 16386.00 IOPS, 128.02 MiB/s [2024-11-20T10:25:18.016Z]
00:30:51.039 [2024-11-20 11:25:18.481988]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.039 [2024-11-20 11:25:18.482007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.039 [2024-11-20 11:25:18.497294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.039 [2024-11-20 11:25:18.497313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.039 [2024-11-20 11:25:18.512191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.039 [2024-11-20 11:25:18.512211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.039 [2024-11-20 11:25:18.524576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.039 [2024-11-20 11:25:18.524595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.298 [2024-11-20 11:25:18.537936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.298 [2024-11-20 11:25:18.537962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.298 [2024-11-20 11:25:18.552998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.298 [2024-11-20 11:25:18.553016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.298 [2024-11-20 11:25:18.568450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.568469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.582289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.582307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.597405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.597424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.612898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.612916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.625355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.625374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.640552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.640571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.653300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.653320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.668222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.668242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.681081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.681100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.696340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.696360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.709354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 
[2024-11-20 11:25:18.709374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.724914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.724933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.740693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.740711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.756477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.756496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.768308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.768326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.299 [2024-11-20 11:25:18.782334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.299 [2024-11-20 11:25:18.782353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.797275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.797294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 16391.67 IOPS, 128.06 MiB/s [2024-11-20T10:25:19.054Z] [2024-11-20 11:25:18.812562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.812585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.825255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 
[2024-11-20 11:25:18.825274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.840494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.840513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.851592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.851611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.866077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.866096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.881544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.881563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.896753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.896772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.912185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.912205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.926920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.926940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.941538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.941557] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.956887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.956906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.972415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.972435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.982972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.982992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:18.998135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:18.998155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:19.012999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:19.013017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:19.028392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:19.028412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.558 [2024-11-20 11:25:19.041192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.558 [2024-11-20 11:25:19.041211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.052702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.052721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:51.817 [2024-11-20 11:25:19.066325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.066345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.081897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.081922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.097016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.097036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.112467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.112488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.122993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.123013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.138152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.138172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.153337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.153356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.168684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.168703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.180108] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.180127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.194744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.194763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.209496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.209515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.224731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.224750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.239616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.239635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.254276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.254296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.817 [2024-11-20 11:25:19.269215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.817 [2024-11-20 11:25:19.269234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.818 [2024-11-20 11:25:19.284657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.818 [2024-11-20 11:25:19.284676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.818 [2024-11-20 11:25:19.300531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:51.818 [2024-11-20 11:25:19.300550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.314121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.314140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.329340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.329359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.345294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.345314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.360266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.360293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.372578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.372597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.388564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.388585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.398704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.398723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.413604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 
[2024-11-20 11:25:19.413622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.428187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.428206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.442259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.442278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.457073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.457092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.471911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.471930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.485065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.485084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.500857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.500875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.513437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.513456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.528558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.528577] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.539419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.539439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.554341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.554360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.076 [2024-11-20 11:25:19.569029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.076 [2024-11-20 11:25:19.569048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.584101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.584120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.596809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.596828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.610408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.610426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.625299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.625321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.640175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.640195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:52.336 [2024-11-20 11:25:19.654395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.654414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.669146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.669164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.684087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.684106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.697183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.697201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.709868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.709887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.725157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.725177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.740097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.740117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.751623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.751642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.766165] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.766184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.781856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.781874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.796925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.796944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 16371.75 IOPS, 127.90 MiB/s [2024-11-20T10:25:19.832Z] [2024-11-20 11:25:19.813001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.813020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.336 [2024-11-20 11:25:19.828155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.336 [2024-11-20 11:25:19.828175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.841021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.841040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.854071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.854091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.869186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.869206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.884097] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.884116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.896324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.896342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.910643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.910662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.925493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.925512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.940527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.940546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.952118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.952138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.965960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.965978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.980858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.980876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:19.996382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:19.996401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:20.009124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:20.009143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:20.026036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:20.026057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:20.040778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:20.040797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:20.056793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:20.056813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:20.070030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:20.070050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.595 [2024-11-20 11:25:20.085311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.595 [2024-11-20 11:25:20.085331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.854 [2024-11-20 11:25:20.100333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.854 [2024-11-20 11:25:20.100352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.854 [2024-11-20 11:25:20.113393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.854 
[2024-11-20 11:25:20.113414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.854 [2024-11-20 11:25:20.124944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.854 [2024-11-20 11:25:20.124969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.854 [2024-11-20 11:25:20.137894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.854 [2024-11-20 11:25:20.137913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.854 [2024-11-20 11:25:20.153408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.854 [2024-11-20 11:25:20.153427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.855 [2024-11-20 11:25:20.168427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.855 [2024-11-20 11:25:20.168447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.855 [2024-11-20 11:25:20.182317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.855 [2024-11-20 11:25:20.182337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.855 [2024-11-20 11:25:20.197676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.855 [2024-11-20 11:25:20.197696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.855 [2024-11-20 11:25:20.212670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.855 [2024-11-20 11:25:20.212689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.855 [2024-11-20 11:25:20.228781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.855 [2024-11-20 11:25:20.228800] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.855 [2024-11-20 11:25:20.244484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.855 [2024-11-20 11:25:20.244503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:53.374 16344.60 IOPS, 127.69 MiB/s [2024-11-20T10:25:20.870Z]
00:30:53.374 Latency(us)
00:30:53.374 Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:30:53.374 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:30:53.374 Nvme1n1 : 5.01  16347.30  127.71  0.00  0.00  7822.66  2037.31  13107.20
00:30:53.374 ===================================================================================================================
00:30:53.374 Total   : 5.01  16347.30  127.71  0.00  0.00  7822.66  2037.31  13107.20
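The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs above come from the test deliberately re-adding a namespace whose NSID is still registered on the subsystem. A minimal Python model of that duplicate-NSID check (an illustrative sketch only, not SPDK's actual implementation) shows the same behavior:

```python
# Illustrative model of duplicate-NSID rejection (NOT SPDK source code):
# a subsystem tracks its active namespace IDs, and adding an NSID that is
# already present is rejected, mirroring the repeated
# "Requested NSID 1 already in use" errors in the log above.

class Subsystem:
    def __init__(self):
        self.namespaces = {}  # nsid -> bdev name

    def add_ns(self, bdev_name, nsid):
        if nsid in self.namespaces:
            return (False, f"Requested NSID {nsid} already in use")
        self.namespaces[nsid] = bdev_name
        return (True, "")

    def remove_ns(self, nsid):
        self.namespaces.pop(nsid, None)


sub = Subsystem()
ok, _ = sub.add_ns("malloc0", 1)      # first add of NSID 1 succeeds
again, err = sub.add_ns("delay0", 1)  # re-adding NSID 1 is rejected
sub.remove_ns(1)                      # once removed, the NSID is free again
retry, _ = sub.add_ns("delay0", 1)
```

This matches the sequence later in the log, where `nvmf_subsystem_remove_ns` frees NSID 1 before the `delay0` bdev is added with `nvmf_subsystem_add_ns ... -n 1`.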
[2024-11-20 11:25:20.976311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:53.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (81659) - No such process 00:30:53.633 11:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 81659 00:30:53.633 11:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.633 11:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.633 11:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:53.633 11:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.633 11:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:53.633 11:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.633 11:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:53.633 delay0 00:30:53.633 11:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.633 11:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:53.633 11:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.633 11:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:53.633 11:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:53.633 11:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:53.633 [2024-11-20 11:25:21.121902] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:01.750 Initializing NVMe Controllers 00:31:01.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.750 Initialization complete. Launching workers. 00:31:01.750 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 236, failed: 28204 00:31:01.750 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 28322, failed to submit 118 00:31:01.750 success 28224, unsuccessful 98, failed 0 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.750 rmmod nvme_tcp 
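The abort example's summary can be reconciled arithmetically: every submitted abort is counted as either successful or unsuccessful, and the attempts that failed to submit sit outside that total. A quick check using the counts reported above:

```python
# Sanity check of the abort example's summary accounting, using the
# counts printed in the log above ("abort submitted 28322, failed to
# submit 118, success 28224, unsuccessful 98, failed 0").

submitted = 28322
failed_to_submit = 118
success = 28224
unsuccessful = 98

# Submitted aborts split exactly into successes and unsuccessful aborts.
assert success + unsuccessful == submitted

# Total attempts include those that never made it onto the queue.
attempted = submitted + failed_to_submit  # 28440
```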
00:31:01.750 rmmod nvme_fabrics 00:31:01.750 rmmod nvme_keyring 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 79895 ']' 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 79895 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 79895 ']' 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 79895 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.750 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79895 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79895' 00:31:01.751 killing process with pid 79895 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 79895 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # 
wait 79895 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.751 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.132 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:03.391 00:31:03.391 real 0m32.121s 00:31:03.391 user 0m41.313s 00:31:03.391 sys 0m13.042s 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:03.391 
************************************ 00:31:03.391 END TEST nvmf_zcopy 00:31:03.391 ************************************ 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:03.391 ************************************ 00:31:03.391 START TEST nvmf_nmic 00:31:03.391 ************************************ 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:03.391 * Looking for test storage... 
00:31:03.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.391 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:03.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.651 --rc genhtml_branch_coverage=1 00:31:03.651 --rc genhtml_function_coverage=1 00:31:03.651 --rc genhtml_legend=1 00:31:03.651 --rc geninfo_all_blocks=1 00:31:03.651 --rc geninfo_unexecuted_blocks=1 00:31:03.651 00:31:03.651 ' 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:03.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.651 --rc genhtml_branch_coverage=1 00:31:03.651 --rc genhtml_function_coverage=1 00:31:03.651 --rc genhtml_legend=1 00:31:03.651 --rc geninfo_all_blocks=1 00:31:03.651 --rc geninfo_unexecuted_blocks=1 00:31:03.651 00:31:03.651 ' 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:03.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.651 --rc genhtml_branch_coverage=1 00:31:03.651 --rc genhtml_function_coverage=1 00:31:03.651 --rc genhtml_legend=1 00:31:03.651 --rc geninfo_all_blocks=1 00:31:03.651 --rc geninfo_unexecuted_blocks=1 00:31:03.651 00:31:03.651 ' 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:03.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.651 --rc genhtml_branch_coverage=1 00:31:03.651 --rc genhtml_function_coverage=1 00:31:03.651 --rc genhtml_legend=1 00:31:03.651 --rc geninfo_all_blocks=1 00:31:03.651 --rc geninfo_unexecuted_blocks=1 00:31:03.651 00:31:03.651 ' 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.651 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:03.652 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.218 11:25:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
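The per-PCI lookup traced above (nvmf/common.sh@411, `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`) can be sketched as follows. This is a minimal stand-in, not the script itself: a fake sysfs tree is built under a temp dir so the glob can be demonstrated without the real 0000:86:00.x ice NICs; the device and interface names are taken from this log.

```shell
# Sketch of how a NIC's netdev name is found from its PCI address:
# the kernel exposes it as a directory under /sys/bus/pci/devices/<addr>/net/.
# A fake sysfs tree is used here so no hardware or root access is needed.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"
for pci in 0000:86:00.0 0000:86:00.1; do
  pci_net_devs=("$sysfs/$pci/net/"*)          # glob the per-device net/ dir
  pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the path, keep the name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
rm -rf "$sysfs"
```

The real script additionally checks the bound driver and link state before accepting a device, as the `[[ ice == unknown ]]` / `[[ up == up ]]` tests below show.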
00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:10.218 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:10.218 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.218 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:10.219 Found net devices under 0000:86:00.0: cvl_0_0 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:10.219 Found net devices under 0000:86:00.1: cvl_0_1 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:10.219 11:25:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:10.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:31:10.219 00:31:10.219 --- 10.0.0.2 ping statistics --- 00:31:10.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.219 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:10.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:31:10.219 00:31:10.219 --- 10.0.0.1 ping statistics --- 00:31:10.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.219 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.219 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=87229 
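The network plumbing traced above (nvmf/common.sh@267-287) isolates the target side in a namespace so target and initiator can talk over real NICs on one host. A dry-run sketch of that sequence, with interface names and IPs taken from this log — commands are echoed rather than executed so no root or hardware is required:

```shell
# Dry-run of the target-namespace setup: cvl_0_0 is moved into the
# cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in
# the root namespace as the initiator side (10.0.0.1).
run() { echo "+ $*"; }   # print instead of execute
run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The two pings in the log (root ns → 10.0.0.2, namespace → 10.0.0.1) then verify the path in both directions before nvmf_tgt is launched inside the namespace.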
00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 87229 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 87229 ']' 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:10.220 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.220 [2024-11-20 11:25:36.830964] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:10.220 [2024-11-20 11:25:36.831907] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:31:10.220 [2024-11-20 11:25:36.831943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.220 [2024-11-20 11:25:36.913748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:10.220 [2024-11-20 11:25:36.957338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.220 [2024-11-20 11:25:36.957377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.220 [2024-11-20 11:25:36.957384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.220 [2024-11-20 11:25:36.957390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:10.220 [2024-11-20 11:25:36.957395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:10.220 [2024-11-20 11:25:36.958920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.220 [2024-11-20 11:25:36.958974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.220 [2024-11-20 11:25:36.959082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.220 [2024-11-20 11:25:36.959082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:10.220 [2024-11-20 11:25:37.026813] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:10.220 [2024-11-20 11:25:37.027837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:10.220 [2024-11-20 11:25:37.027870] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:10.220 [2024-11-20 11:25:37.028226] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:10.220 [2024-11-20 11:25:37.028280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.220 [2024-11-20 11:25:37.095903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.220 Malloc0 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.220 [2024-11-20 11:25:37.184096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.220 11:25:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.220 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:10.221 test case1: single bdev can't be used in multiple subsystems 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.221 [2024-11-20 11:25:37.219597] 
bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:10.221 [2024-11-20 11:25:37.219616] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:10.221 [2024-11-20 11:25:37.219624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.221 request: 00:31:10.221 { 00:31:10.221 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:10.221 "namespace": { 00:31:10.221 "bdev_name": "Malloc0", 00:31:10.221 "no_auto_visible": false 00:31:10.221 }, 00:31:10.221 "method": "nvmf_subsystem_add_ns", 00:31:10.221 "req_id": 1 00:31:10.221 } 00:31:10.221 Got JSON-RPC error response 00:31:10.221 response: 00:31:10.221 { 00:31:10.221 "code": -32602, 00:31:10.221 "message": "Invalid parameters" 00:31:10.221 } 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:10.221 Adding namespace failed - expected result. 
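Test case 1 above deliberately provokes the `bdev Malloc0 already claimed` error: Malloc0 is already attached to cnode1, so adding it to cnode2 must fail, and the script treats the RPC failure as the passing outcome. The `nmic_status` logic from target/nmic.sh@28-36 can be sketched like this, with `rpc_cmd` replaced by a failing stub standing in for the rejected `nvmf_subsystem_add_ns` call seen in the trace:

```shell
# Expected-failure check: the RPC is supposed to fail because the bdev is
# already claimed by another subsystem. rpc_cmd here is a stub that fails,
# as the real call does in the log above.
rpc_cmd() { return 1; }
nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
  echo "Adding namespace passed - failure expected."
else
  echo " Adding namespace failed - expected result."
fi
```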
00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:10.221 test case2: host connect to nvmf target in multiple paths 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.221 [2024-11-20 11:25:37.231691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:10.221 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:10.480 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:10.480 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:10.480 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:10.480 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:10.480 11:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:12.382 11:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:12.383 11:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:12.383 11:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:12.383 11:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:12.383 11:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:12.383 11:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:12.383 11:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:12.383 [global] 00:31:12.383 thread=1 00:31:12.383 invalidate=1 00:31:12.383 rw=write 00:31:12.383 time_based=1 00:31:12.383 runtime=1 00:31:12.383 ioengine=libaio 00:31:12.383 direct=1 00:31:12.383 bs=4096 00:31:12.383 iodepth=1 00:31:12.383 norandommap=0 00:31:12.383 numjobs=1 00:31:12.383 00:31:12.383 verify_dump=1 00:31:12.383 verify_backlog=512 00:31:12.383 verify_state_save=0 00:31:12.383 do_verify=1 00:31:12.383 verify=crc32c-intel 00:31:12.383 [job0] 00:31:12.383 filename=/dev/nvme0n1 00:31:12.383 Could not set queue depth (nvme0n1) 00:31:12.641 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:12.641 fio-3.35 00:31:12.641 Starting 1 thread 00:31:14.017 00:31:14.017 job0: (groupid=0, jobs=1): err= 0: pid=87852: Wed Nov 20 11:25:41 
2024 00:31:14.017 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:31:14.017 slat (nsec): min=11545, max=23720, avg=21885.09, stdev=2481.06 00:31:14.017 clat (usec): min=40817, max=41260, avg=40979.68, stdev=89.30 00:31:14.017 lat (usec): min=40841, max=41272, avg=41001.56, stdev=87.27 00:31:14.017 clat percentiles (usec): 00:31:14.017 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:14.017 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:14.017 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:14.017 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:14.017 | 99.99th=[41157] 00:31:14.017 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:31:14.017 slat (nsec): min=9899, max=40315, avg=11080.01, stdev=2289.31 00:31:14.017 clat (usec): min=123, max=328, avg=141.54, stdev=10.79 00:31:14.017 lat (usec): min=141, max=369, avg=152.62, stdev=11.89 00:31:14.017 clat percentiles (usec): 00:31:14.017 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:31:14.017 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 141], 00:31:14.017 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 147], 95.00th=[ 149], 00:31:14.017 | 99.00th=[ 165], 99.50th=[ 219], 99.90th=[ 330], 99.95th=[ 330], 00:31:14.017 | 99.99th=[ 330] 00:31:14.018 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:14.018 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:14.018 lat (usec) : 250=95.51%, 500=0.19% 00:31:14.018 lat (msec) : 50=4.30% 00:31:14.018 cpu : usr=0.29%, sys=0.98%, ctx=535, majf=0, minf=1 00:31:14.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.018 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:14.018 00:31:14.018 Run status group 0 (all jobs): 00:31:14.018 READ: bw=89.9KiB/s (92.1kB/s), 89.9KiB/s-89.9KiB/s (92.1kB/s-92.1kB/s), io=92.0KiB (94.2kB), run=1023-1023msec 00:31:14.018 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:31:14.018 00:31:14.018 Disk stats (read/write): 00:31:14.018 nvme0n1: ios=69/512, merge=0/0, ticks=805/64, in_queue=869, util=91.38% 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:14.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:14.018 11:25:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:14.018 rmmod nvme_tcp 00:31:14.018 rmmod nvme_fabrics 00:31:14.018 rmmod nvme_keyring 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 87229 ']' 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 87229 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 87229 ']' 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 87229 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87229 00:31:14.018 11:25:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87229' 00:31:14.018 killing process with pid 87229 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 87229 00:31:14.018 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 87229 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.278 11:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.816 00:31:16.816 real 0m13.058s 00:31:16.816 user 0m24.001s 00:31:16.816 sys 0m6.051s 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.816 ************************************ 00:31:16.816 END TEST nvmf_nmic 00:31:16.816 ************************************ 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:16.816 ************************************ 00:31:16.816 START TEST nvmf_fio_target 00:31:16.816 ************************************ 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:16.816 * Looking for test storage... 
00:31:16.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:16.816 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.817 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:16.817 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:16.817 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:16.817 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.817 
11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.817 --rc genhtml_branch_coverage=1 00:31:16.817 --rc genhtml_function_coverage=1 00:31:16.817 --rc genhtml_legend=1 00:31:16.817 --rc geninfo_all_blocks=1 00:31:16.817 --rc geninfo_unexecuted_blocks=1 00:31:16.817 00:31:16.817 ' 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.817 --rc genhtml_branch_coverage=1 00:31:16.817 --rc genhtml_function_coverage=1 00:31:16.817 --rc genhtml_legend=1 00:31:16.817 --rc geninfo_all_blocks=1 00:31:16.817 --rc geninfo_unexecuted_blocks=1 00:31:16.817 00:31:16.817 ' 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.817 --rc genhtml_branch_coverage=1 00:31:16.817 --rc genhtml_function_coverage=1 00:31:16.817 --rc genhtml_legend=1 00:31:16.817 --rc geninfo_all_blocks=1 00:31:16.817 --rc geninfo_unexecuted_blocks=1 00:31:16.817 00:31:16.817 ' 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.817 --rc genhtml_branch_coverage=1 00:31:16.817 --rc genhtml_function_coverage=1 00:31:16.817 --rc genhtml_legend=1 00:31:16.817 --rc geninfo_all_blocks=1 
00:31:16.817 --rc geninfo_unexecuted_blocks=1 00:31:16.817 00:31:16.817 ' 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:16.817 
11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.817 11:25:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.817 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.818 
11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:16.818 11:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.818 11:25:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:23.461 11:25:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:23.461 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:23.462 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:23.462 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.462 
11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:23.462 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:23.462 Found net devices under 0000:86:00.1: cvl_0_1 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:23.462 11:25:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:23.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:31:23.462 00:31:23.462 --- 10.0.0.2 ping statistics --- 00:31:23.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.462 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:23.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:31:23.462 00:31:23.462 --- 10.0.0.1 ping statistics --- 00:31:23.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.462 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.462 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:23.463 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.463 11:25:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=91612 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 91612 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 91612 ']' 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.463 [2024-11-20 11:25:50.072169] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:23.463 [2024-11-20 11:25:50.073133] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:31:23.463 [2024-11-20 11:25:50.073170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.463 [2024-11-20 11:25:50.154943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:23.463 [2024-11-20 11:25:50.197838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.463 [2024-11-20 11:25:50.197877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.463 [2024-11-20 11:25:50.197884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.463 [2024-11-20 11:25:50.197891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.463 [2024-11-20 11:25:50.197896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.463 [2024-11-20 11:25:50.199320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.463 [2024-11-20 11:25:50.199353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:23.463 [2024-11-20 11:25:50.199375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:23.463 [2024-11-20 11:25:50.199377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.463 [2024-11-20 11:25:50.267528] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:23.463 [2024-11-20 11:25:50.268009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:23.463 [2024-11-20 11:25:50.268444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:23.463 [2024-11-20 11:25:50.268456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:23.463 [2024-11-20 11:25:50.268547] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:23.463 [2024-11-20 11:25:50.516266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:23.463 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:23.722 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:23.722 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:23.722 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:23.723 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:23.982 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:23.982 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:24.240 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:24.500 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:24.500 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:24.759 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:24.759 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:24.759 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:24.759 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:25.017 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:25.277 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:25.277 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:25.535 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:25.535 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:25.535 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:25.794 [2024-11-20 11:25:53.176170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.794 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:26.053 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:26.312 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:26.571 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:26.571 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:26.571 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:26.571 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:26.571 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:26.571 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:28.474 11:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:28.474 11:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:28.475 11:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:28.475 11:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:28.475 11:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:28.475 11:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:28.475 11:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:28.475 [global] 00:31:28.475 thread=1 00:31:28.475 invalidate=1 00:31:28.475 rw=write 00:31:28.475 time_based=1 00:31:28.475 runtime=1 00:31:28.475 ioengine=libaio 00:31:28.475 direct=1 00:31:28.475 bs=4096 00:31:28.475 iodepth=1 00:31:28.475 norandommap=0 00:31:28.475 numjobs=1 00:31:28.475 00:31:28.475 verify_dump=1 00:31:28.475 verify_backlog=512 00:31:28.475 verify_state_save=0 00:31:28.475 do_verify=1 00:31:28.475 verify=crc32c-intel 00:31:28.475 [job0] 00:31:28.475 filename=/dev/nvme0n1 00:31:28.475 [job1] 00:31:28.475 filename=/dev/nvme0n2 00:31:28.475 [job2] 00:31:28.475 filename=/dev/nvme0n3 00:31:28.475 [job3] 00:31:28.475 filename=/dev/nvme0n4 00:31:28.475 Could not set queue depth (nvme0n1) 00:31:28.475 Could not set queue depth (nvme0n2) 00:31:28.475 Could not set queue depth (nvme0n3) 00:31:28.475 Could not set queue depth (nvme0n4) 00:31:28.733 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:28.733 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:28.733 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:28.733 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:28.733 fio-3.35 00:31:28.733 Starting 4 threads 00:31:30.111 00:31:30.111 job0: (groupid=0, jobs=1): err= 0: pid=92731: Wed Nov 20 11:25:57 2024 00:31:30.111 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:31:30.111 slat (nsec): min=10352, max=36289, avg=23038.09, stdev=4223.44 00:31:30.111 clat (usec): min=40909, max=41954, avg=41019.05, stdev=209.46 00:31:30.111 lat (usec): min=40934, 
max=41978, avg=41042.08, stdev=209.21 00:31:30.111 clat percentiles (usec): 00:31:30.111 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:30.111 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:30.111 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:30.111 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:30.111 | 99.99th=[42206] 00:31:30.111 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:31:30.111 slat (nsec): min=10673, max=48430, avg=12050.72, stdev=2031.81 00:31:30.111 clat (usec): min=149, max=244, avg=165.23, stdev= 9.68 00:31:30.111 lat (usec): min=161, max=257, avg=177.28, stdev=10.20 00:31:30.111 clat percentiles (usec): 00:31:30.111 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:31:30.111 | 30.00th=[ 161], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:31:30.111 | 70.00th=[ 169], 80.00th=[ 172], 90.00th=[ 176], 95.00th=[ 182], 00:31:30.111 | 99.00th=[ 192], 99.50th=[ 208], 99.90th=[ 245], 99.95th=[ 245], 00:31:30.111 | 99.99th=[ 245] 00:31:30.111 bw ( KiB/s): min= 4096, max= 4096, per=20.97%, avg=4096.00, stdev= 0.00, samples=1 00:31:30.111 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:30.111 lat (usec) : 250=95.70% 00:31:30.111 lat (msec) : 50=4.30% 00:31:30.111 cpu : usr=0.19%, sys=1.06%, ctx=538, majf=0, minf=1 00:31:30.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.111 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:30.111 job1: (groupid=0, jobs=1): err= 0: pid=92732: Wed Nov 20 11:25:57 2024 00:31:30.111 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:31:30.111 
slat (nsec): min=2159, max=22524, avg=3230.61, stdev=2140.06 00:31:30.111 clat (usec): min=145, max=349, avg=168.04, stdev=18.53 00:31:30.111 lat (usec): min=148, max=353, avg=171.27, stdev=19.63 00:31:30.111 clat percentiles (usec): 00:31:30.111 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:31:30.111 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:31:30.111 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 194], 00:31:30.111 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 277], 99.95th=[ 302], 00:31:30.111 | 99.99th=[ 351] 00:31:30.111 write: IOPS=3524, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1001msec); 0 zone resets 00:31:30.111 slat (nsec): min=3073, max=25763, avg=4395.81, stdev=2496.62 00:31:30.111 clat (usec): min=101, max=418, avg=127.73, stdev=29.81 00:31:30.111 lat (usec): min=105, max=425, avg=132.12, stdev=30.37 00:31:30.111 clat percentiles (usec): 00:31:30.111 | 1.00th=[ 108], 5.00th=[ 109], 10.00th=[ 110], 20.00th=[ 111], 00:31:30.111 | 30.00th=[ 113], 40.00th=[ 114], 50.00th=[ 115], 60.00th=[ 117], 00:31:30.111 | 70.00th=[ 122], 80.00th=[ 137], 90.00th=[ 186], 95.00th=[ 200], 00:31:30.111 | 99.00th=[ 217], 99.50th=[ 227], 99.90th=[ 262], 99.95th=[ 416], 00:31:30.111 | 99.99th=[ 420] 00:31:30.111 bw ( KiB/s): min=12288, max=12288, per=62.91%, avg=12288.00, stdev= 0.00, samples=1 00:31:30.111 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:30.111 lat (usec) : 250=99.29%, 500=0.71% 00:31:30.111 cpu : usr=3.00%, sys=3.20%, ctx=6600, majf=0, minf=1 00:31:30.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.111 issued rwts: total=3072,3528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:30.111 job2: (groupid=0, jobs=1): err= 0: 
pid=92733: Wed Nov 20 11:25:57 2024 00:31:30.111 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:31:30.111 slat (nsec): min=11749, max=26993, avg=24349.77, stdev=3133.08 00:31:30.111 clat (usec): min=40848, max=41162, avg=40973.85, stdev=68.65 00:31:30.111 lat (usec): min=40873, max=41174, avg=40998.20, stdev=66.68 00:31:30.111 clat percentiles (usec): 00:31:30.111 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:30.111 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:30.111 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:30.111 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:30.111 | 99.99th=[41157] 00:31:30.111 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:31:30.111 slat (nsec): min=12544, max=46762, avg=13954.15, stdev=2606.16 00:31:30.111 clat (usec): min=141, max=377, avg=182.57, stdev=19.07 00:31:30.111 lat (usec): min=154, max=391, avg=196.52, stdev=19.50 00:31:30.111 clat percentiles (usec): 00:31:30.111 | 1.00th=[ 147], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:31:30.111 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 186], 00:31:30.111 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:31:30.111 | 99.00th=[ 245], 99.50th=[ 310], 99.90th=[ 379], 99.95th=[ 379], 00:31:30.111 | 99.99th=[ 379] 00:31:30.111 bw ( KiB/s): min= 4096, max= 4096, per=20.97%, avg=4096.00, stdev= 0.00, samples=1 00:31:30.111 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:30.111 lat (usec) : 250=94.94%, 500=0.94% 00:31:30.111 lat (msec) : 50=4.12% 00:31:30.111 cpu : usr=0.70%, sys=0.80%, ctx=535, majf=0, minf=1 00:31:30.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:30.111 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:30.111 job3: (groupid=0, jobs=1): err= 0: pid=92734: Wed Nov 20 11:25:57 2024 00:31:30.111 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:31:30.111 slat (nsec): min=9935, max=24941, avg=22195.95, stdev=2992.71 00:31:30.111 clat (usec): min=40805, max=42047, avg=41003.61, stdev=239.40 00:31:30.111 lat (usec): min=40814, max=42069, avg=41025.80, stdev=239.60 00:31:30.112 clat percentiles (usec): 00:31:30.112 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:30.112 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:30.112 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:30.112 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:30.112 | 99.99th=[42206] 00:31:30.112 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:31:30.112 slat (nsec): min=10380, max=36919, avg=11582.21, stdev=1904.87 00:31:30.112 clat (usec): min=154, max=282, avg=180.46, stdev=11.13 00:31:30.112 lat (usec): min=169, max=293, avg=192.04, stdev=11.59 00:31:30.112 clat percentiles (usec): 00:31:30.112 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 172], 00:31:30.112 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:31:30.112 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 192], 95.00th=[ 200], 00:31:30.112 | 99.00th=[ 212], 99.50th=[ 223], 99.90th=[ 281], 99.95th=[ 281], 00:31:30.112 | 99.99th=[ 281] 00:31:30.112 bw ( KiB/s): min= 4096, max= 4096, per=20.97%, avg=4096.00, stdev= 0.00, samples=1 00:31:30.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:30.112 lat (usec) : 250=95.51%, 500=0.37% 00:31:30.112 lat (msec) : 50=4.12% 00:31:30.112 cpu : usr=0.20%, sys=1.10%, ctx=534, majf=0, minf=1 00:31:30.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:31:30.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.112 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:30.112 00:31:30.112 Run status group 0 (all jobs): 00:31:30.112 READ: bw=11.8MiB/s (12.4MB/s), 87.6KiB/s-12.0MiB/s (89.7kB/s-12.6MB/s), io=12.3MiB (12.9MB), run=1001-1037msec 00:31:30.112 WRITE: bw=19.1MiB/s (20.0MB/s), 1975KiB/s-13.8MiB/s (2022kB/s-14.4MB/s), io=19.8MiB (20.7MB), run=1001-1037msec 00:31:30.112 00:31:30.112 Disk stats (read/write): 00:31:30.112 nvme0n1: ios=40/512, merge=0/0, ticks=1600/78, in_queue=1678, util=85.76% 00:31:30.112 nvme0n2: ios=2610/2980, merge=0/0, ticks=477/382, in_queue=859, util=91.05% 00:31:30.112 nvme0n3: ios=41/512, merge=0/0, ticks=1642/76, in_queue=1718, util=93.54% 00:31:30.112 nvme0n4: ios=75/512, merge=0/0, ticks=797/89, in_queue=886, util=95.17% 00:31:30.112 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:30.112 [global] 00:31:30.112 thread=1 00:31:30.112 invalidate=1 00:31:30.112 rw=randwrite 00:31:30.112 time_based=1 00:31:30.112 runtime=1 00:31:30.112 ioengine=libaio 00:31:30.112 direct=1 00:31:30.112 bs=4096 00:31:30.112 iodepth=1 00:31:30.112 norandommap=0 00:31:30.112 numjobs=1 00:31:30.112 00:31:30.112 verify_dump=1 00:31:30.112 verify_backlog=512 00:31:30.112 verify_state_save=0 00:31:30.112 do_verify=1 00:31:30.112 verify=crc32c-intel 00:31:30.112 [job0] 00:31:30.112 filename=/dev/nvme0n1 00:31:30.112 [job1] 00:31:30.112 filename=/dev/nvme0n2 00:31:30.112 [job2] 00:31:30.112 filename=/dev/nvme0n3 00:31:30.112 [job3] 00:31:30.112 filename=/dev/nvme0n4 00:31:30.112 Could not set queue depth (nvme0n1) 00:31:30.112 Could not set 
queue depth (nvme0n2) 00:31:30.112 Could not set queue depth (nvme0n3) 00:31:30.112 Could not set queue depth (nvme0n4) 00:31:30.370 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.370 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.371 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.371 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.371 fio-3.35 00:31:30.371 Starting 4 threads 00:31:31.747 00:31:31.747 job0: (groupid=0, jobs=1): err= 0: pid=93107: Wed Nov 20 11:25:59 2024 00:31:31.747 read: IOPS=653, BW=2612KiB/s (2675kB/s)(2636KiB/1009msec) 00:31:31.747 slat (nsec): min=7207, max=36891, avg=8838.91, stdev=2677.65 00:31:31.747 clat (usec): min=194, max=41310, avg=1207.47, stdev=6274.47 00:31:31.747 lat (usec): min=202, max=41319, avg=1216.31, stdev=6275.37 00:31:31.747 clat percentiles (usec): 00:31:31.747 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:31:31.747 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 219], 00:31:31.747 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 231], 95.00th=[ 247], 00:31:31.747 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:31.747 | 99.99th=[41157] 00:31:31.747 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:31:31.747 slat (nsec): min=10618, max=35412, avg=11815.32, stdev=1830.72 00:31:31.747 clat (usec): min=137, max=399, avg=181.27, stdev=36.63 00:31:31.747 lat (usec): min=148, max=434, avg=193.09, stdev=36.85 00:31:31.747 clat percentiles (usec): 00:31:31.747 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:31:31.747 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 176], 60.00th=[ 186], 00:31:31.747 | 70.00th=[ 194], 80.00th=[ 208], 90.00th=[ 239], 95.00th=[ 
253], 00:31:31.747 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 347], 99.95th=[ 400], 00:31:31.747 | 99.99th=[ 400] 00:31:31.747 bw ( KiB/s): min= 1248, max= 6944, per=34.03%, avg=4096.00, stdev=4027.68, samples=2 00:31:31.747 iops : min= 312, max= 1736, avg=1024.00, stdev=1006.92, samples=2 00:31:31.747 lat (usec) : 250=94.47%, 500=4.52%, 1000=0.06% 00:31:31.747 lat (msec) : 50=0.95% 00:31:31.747 cpu : usr=1.49%, sys=2.68%, ctx=1685, majf=0, minf=1 00:31:31.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.747 issued rwts: total=659,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:31.747 job1: (groupid=0, jobs=1): err= 0: pid=93108: Wed Nov 20 11:25:59 2024 00:31:31.747 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:31:31.747 slat (nsec): min=11327, max=26775, avg=20747.95, stdev=2465.06 00:31:31.747 clat (usec): min=40803, max=41994, avg=41031.01, stdev=233.98 00:31:31.747 lat (usec): min=40824, max=42021, avg=41051.76, stdev=234.60 00:31:31.747 clat percentiles (usec): 00:31:31.747 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:31.747 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:31.747 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:31.747 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:31.747 | 99.99th=[42206] 00:31:31.747 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:31:31.747 slat (nsec): min=3499, max=61146, avg=7926.47, stdev=4815.73 00:31:31.747 clat (usec): min=133, max=311, avg=201.12, stdev=27.83 00:31:31.747 lat (usec): min=137, max=345, avg=209.05, stdev=29.28 00:31:31.747 clat percentiles (usec): 00:31:31.747 | 1.00th=[ 147], 
5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 182], 00:31:31.747 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 202], 00:31:31.747 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 239], 95.00th=[ 260], 00:31:31.747 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 314], 99.95th=[ 314], 00:31:31.747 | 99.99th=[ 314] 00:31:31.747 bw ( KiB/s): min= 4096, max= 4096, per=34.03%, avg=4096.00, stdev= 0.00, samples=1 00:31:31.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:31.747 lat (usec) : 250=89.70%, 500=6.18% 00:31:31.747 lat (msec) : 50=4.12% 00:31:31.747 cpu : usr=0.30%, sys=0.59%, ctx=535, majf=0, minf=2 00:31:31.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.747 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:31.747 job2: (groupid=0, jobs=1): err= 0: pid=93109: Wed Nov 20 11:25:59 2024 00:31:31.747 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:31:31.747 slat (nsec): min=10446, max=28891, avg=22589.64, stdev=3282.64 00:31:31.747 clat (usec): min=40619, max=41959, avg=41046.29, stdev=306.30 00:31:31.747 lat (usec): min=40629, max=41984, avg=41068.88, stdev=308.21 00:31:31.747 clat percentiles (usec): 00:31:31.747 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:31.747 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:31.747 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:31.747 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:31.747 | 99.99th=[42206] 00:31:31.747 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:31:31.747 slat (nsec): min=10849, max=36170, avg=12640.57, stdev=1939.10 00:31:31.747 clat 
(usec): min=154, max=286, avg=181.07, stdev=11.84 00:31:31.747 lat (usec): min=165, max=322, avg=193.71, stdev=12.47 00:31:31.747 clat percentiles (usec): 00:31:31.747 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:31:31.747 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:31:31.747 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 194], 95.00th=[ 200], 00:31:31.748 | 99.00th=[ 219], 99.50th=[ 229], 99.90th=[ 285], 99.95th=[ 285], 00:31:31.748 | 99.99th=[ 285] 00:31:31.748 bw ( KiB/s): min= 4096, max= 4096, per=34.03%, avg=4096.00, stdev= 0.00, samples=1 00:31:31.748 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:31.748 lat (usec) : 250=95.69%, 500=0.19% 00:31:31.748 lat (msec) : 50=4.12% 00:31:31.748 cpu : usr=0.50%, sys=0.89%, ctx=537, majf=0, minf=1 00:31:31.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.748 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:31.748 job3: (groupid=0, jobs=1): err= 0: pid=93110: Wed Nov 20 11:25:59 2024 00:31:31.748 read: IOPS=648, BW=2594KiB/s (2656kB/s)(2648KiB/1021msec) 00:31:31.748 slat (nsec): min=6925, max=25023, avg=8128.54, stdev=2686.56 00:31:31.748 clat (usec): min=186, max=41585, avg=1257.02, stdev=6448.89 00:31:31.748 lat (usec): min=194, max=41594, avg=1265.15, stdev=6450.24 00:31:31.748 clat percentiles (usec): 00:31:31.748 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 198], 00:31:31.748 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 00:31:31.748 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 247], 95.00th=[ 253], 00:31:31.748 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:31:31.748 | 99.99th=[41681] 00:31:31.748 
write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:31:31.748 slat (nsec): min=8473, max=53740, avg=10907.06, stdev=2596.10 00:31:31.748 clat (usec): min=128, max=381, avg=160.51, stdev=23.68 00:31:31.748 lat (usec): min=142, max=410, avg=171.42, stdev=24.27 00:31:31.748 clat percentiles (usec): 00:31:31.748 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:31:31.748 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 159], 60.00th=[ 169], 00:31:31.748 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:31:31.748 | 99.00th=[ 231], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 383], 00:31:31.748 | 99.99th=[ 383] 00:31:31.748 bw ( KiB/s): min= 8192, max= 8192, per=68.07%, avg=8192.00, stdev= 0.00, samples=1 00:31:31.748 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:31.748 lat (usec) : 250=96.98%, 500=2.02% 00:31:31.748 lat (msec) : 50=1.01% 00:31:31.748 cpu : usr=0.69%, sys=1.67%, ctx=1687, majf=0, minf=1 00:31:31.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.748 issued rwts: total=662,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:31.748 00:31:31.748 Run status group 0 (all jobs): 00:31:31.748 READ: bw=5348KiB/s (5476kB/s), 87.0KiB/s-2612KiB/s (89.0kB/s-2675kB/s), io=5460KiB (5591kB), run=1009-1021msec 00:31:31.748 WRITE: bw=11.8MiB/s (12.3MB/s), 2024KiB/s-4059KiB/s (2072kB/s-4157kB/s), io=12.0MiB (12.6MB), run=1009-1021msec 00:31:31.748 00:31:31.748 Disk stats (read/write): 00:31:31.748 nvme0n1: ios=701/1024, merge=0/0, ticks=958/176, in_queue=1134, util=100.00% 00:31:31.748 nvme0n2: ios=46/512, merge=0/0, ticks=830/97, in_queue=927, util=94.11% 00:31:31.748 nvme0n3: ios=43/512, merge=0/0, ticks=1724/89, 
in_queue=1813, util=97.71% 00:31:31.748 nvme0n4: ios=681/1024, merge=0/0, ticks=1588/157, in_queue=1745, util=99.90% 00:31:31.748 11:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:31.748 [global] 00:31:31.748 thread=1 00:31:31.748 invalidate=1 00:31:31.748 rw=write 00:31:31.748 time_based=1 00:31:31.748 runtime=1 00:31:31.748 ioengine=libaio 00:31:31.748 direct=1 00:31:31.748 bs=4096 00:31:31.748 iodepth=128 00:31:31.748 norandommap=0 00:31:31.748 numjobs=1 00:31:31.748 00:31:31.748 verify_dump=1 00:31:31.748 verify_backlog=512 00:31:31.748 verify_state_save=0 00:31:31.748 do_verify=1 00:31:31.748 verify=crc32c-intel 00:31:31.748 [job0] 00:31:31.748 filename=/dev/nvme0n1 00:31:31.748 [job1] 00:31:31.748 filename=/dev/nvme0n2 00:31:31.748 [job2] 00:31:31.748 filename=/dev/nvme0n3 00:31:31.748 [job3] 00:31:31.748 filename=/dev/nvme0n4 00:31:31.748 Could not set queue depth (nvme0n1) 00:31:31.748 Could not set queue depth (nvme0n2) 00:31:31.748 Could not set queue depth (nvme0n3) 00:31:31.748 Could not set queue depth (nvme0n4) 00:31:32.006 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:32.006 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:32.006 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:32.006 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:32.006 fio-3.35 00:31:32.006 Starting 4 threads 00:31:33.381 00:31:33.381 job0: (groupid=0, jobs=1): err= 0: pid=93479: Wed Nov 20 11:26:00 2024 00:31:33.381 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:31:33.381 slat (nsec): min=1084, max=30166k, avg=150255.06, stdev=1108783.84 00:31:33.381 clat 
(usec): min=2270, max=72987, avg=18412.11, stdev=11439.83 00:31:33.381 lat (usec): min=2273, max=73016, avg=18562.36, stdev=11530.33 00:31:33.381 clat percentiles (usec): 00:31:33.381 | 1.00th=[ 5997], 5.00th=[ 8455], 10.00th=[10290], 20.00th=[10945], 00:31:33.381 | 30.00th=[12780], 40.00th=[13566], 50.00th=[14484], 60.00th=[16188], 00:31:33.381 | 70.00th=[17957], 80.00th=[21365], 90.00th=[34866], 95.00th=[44303], 00:31:33.381 | 99.00th=[59507], 99.50th=[67634], 99.90th=[67634], 99.95th=[69731], 00:31:33.381 | 99.99th=[72877] 00:31:33.381 write: IOPS=3520, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1007msec); 0 zone resets 00:31:33.381 slat (nsec): min=1945, max=12235k, avg=141987.21, stdev=804209.68 00:31:33.381 clat (usec): min=479, max=90845, avg=19749.82, stdev=15434.00 00:31:33.381 lat (usec): min=805, max=90856, avg=19891.81, stdev=15513.52 00:31:33.381 clat percentiles (usec): 00:31:33.381 | 1.00th=[ 988], 5.00th=[ 7046], 10.00th=[ 8094], 20.00th=[ 9634], 00:31:33.381 | 30.00th=[10683], 40.00th=[12518], 50.00th=[15401], 60.00th=[17433], 00:31:33.381 | 70.00th=[20055], 80.00th=[26870], 90.00th=[34341], 95.00th=[49546], 00:31:33.381 | 99.00th=[86508], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:31:33.381 | 99.99th=[90702] 00:31:33.381 bw ( KiB/s): min=12288, max=15048, per=19.46%, avg=13668.00, stdev=1951.61, samples=2 00:31:33.381 iops : min= 3072, max= 3762, avg=3417.00, stdev=487.90, samples=2 00:31:33.381 lat (usec) : 500=0.02%, 1000=0.63% 00:31:33.381 lat (msec) : 2=0.33%, 4=0.21%, 10=15.94%, 20=56.66%, 50=22.05% 00:31:33.381 lat (msec) : 100=4.16% 00:31:33.381 cpu : usr=1.49%, sys=4.97%, ctx=346, majf=0, minf=1 00:31:33.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:31:33.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:33.381 issued rwts: total=3072,3545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.381 
latency : target=0, window=0, percentile=100.00%, depth=128 00:31:33.381 job1: (groupid=0, jobs=1): err= 0: pid=93480: Wed Nov 20 11:26:00 2024 00:31:33.381 read: IOPS=4811, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1002msec) 00:31:33.381 slat (nsec): min=1395, max=10920k, avg=95609.34, stdev=600319.28 00:31:33.381 clat (usec): min=1475, max=54100, avg=11841.31, stdev=5734.21 00:31:33.381 lat (usec): min=1480, max=54107, avg=11936.92, stdev=5782.35 00:31:33.381 clat percentiles (usec): 00:31:33.381 | 1.00th=[ 5604], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9110], 00:31:33.381 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10683], 00:31:33.381 | 70.00th=[11469], 80.00th=[12649], 90.00th=[17695], 95.00th=[25035], 00:31:33.381 | 99.00th=[38011], 99.50th=[43254], 99.90th=[54264], 99.95th=[54264], 00:31:33.381 | 99.99th=[54264] 00:31:33.381 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:31:33.381 slat (usec): min=2, max=20571, avg=99.50, stdev=687.19 00:31:33.381 clat (usec): min=4952, max=55008, avg=13511.10, stdev=9038.74 00:31:33.381 lat (usec): min=4958, max=55019, avg=13610.60, stdev=9099.81 00:31:33.381 clat percentiles (usec): 00:31:33.381 | 1.00th=[ 6128], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9896], 00:31:33.381 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:31:33.381 | 70.00th=[10683], 80.00th=[12518], 90.00th=[21365], 95.00th=[42206], 00:31:33.381 | 99.00th=[45876], 99.50th=[46400], 99.90th=[54789], 99.95th=[54789], 00:31:33.381 | 99.99th=[54789] 00:31:33.381 bw ( KiB/s): min=20480, max=20480, per=29.15%, avg=20480.00, stdev= 0.00, samples=2 00:31:33.381 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:31:33.381 lat (msec) : 2=0.08%, 4=0.16%, 10=36.51%, 20=54.71%, 50=8.31% 00:31:33.381 lat (msec) : 100=0.23% 00:31:33.382 cpu : usr=3.40%, sys=6.09%, ctx=442, majf=0, minf=1 00:31:33.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:33.382 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:33.382 issued rwts: total=4821,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:33.382 job2: (groupid=0, jobs=1): err= 0: pid=93487: Wed Nov 20 11:26:00 2024 00:31:33.382 read: IOPS=3875, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1004msec) 00:31:33.382 slat (nsec): min=1146, max=12780k, avg=112033.64, stdev=811626.98 00:31:33.382 clat (usec): min=525, max=43474, avg=16711.94, stdev=7518.41 00:31:33.382 lat (usec): min=3032, max=43492, avg=16823.97, stdev=7544.24 00:31:33.382 clat percentiles (usec): 00:31:33.382 | 1.00th=[ 5669], 5.00th=[ 6980], 10.00th=[ 8586], 20.00th=[10421], 00:31:33.382 | 30.00th=[12518], 40.00th=[13566], 50.00th=[14353], 60.00th=[16909], 00:31:33.382 | 70.00th=[19006], 80.00th=[21365], 90.00th=[27657], 95.00th=[31851], 00:31:33.382 | 99.00th=[41681], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:31:33.382 | 99.99th=[43254] 00:31:33.382 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:31:33.382 slat (usec): min=2, max=11320, avg=103.38, stdev=717.06 00:31:33.382 clat (usec): min=346, max=33026, avg=15214.14, stdev=7294.17 00:31:33.382 lat (usec): min=357, max=33034, avg=15317.52, stdev=7339.15 00:31:33.382 clat percentiles (usec): 00:31:33.382 | 1.00th=[ 3654], 5.00th=[ 4228], 10.00th=[ 7701], 20.00th=[ 9372], 00:31:33.382 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12911], 60.00th=[14877], 00:31:33.382 | 70.00th=[16909], 80.00th=[18744], 90.00th=[29492], 95.00th=[30802], 00:31:33.382 | 99.00th=[32900], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:31:33.382 | 99.99th=[32900] 00:31:33.382 bw ( KiB/s): min=14120, max=18648, per=23.32%, avg=16384.00, stdev=3201.78, samples=2 00:31:33.382 iops : min= 3530, max= 4662, avg=4096.00, stdev=800.44, samples=2 00:31:33.382 lat (usec) : 
500=0.04%, 750=0.05% 00:31:33.382 lat (msec) : 4=1.54%, 10=17.82%, 20=58.57%, 50=21.99% 00:31:33.382 cpu : usr=2.79%, sys=5.08%, ctx=275, majf=0, minf=2 00:31:33.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:33.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:33.382 issued rwts: total=3891,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:33.382 job3: (groupid=0, jobs=1): err= 0: pid=93488: Wed Nov 20 11:26:00 2024 00:31:33.382 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:31:33.382 slat (nsec): min=1118, max=18333k, avg=105769.63, stdev=803212.22 00:31:33.382 clat (usec): min=3912, max=36531, avg=13831.99, stdev=4255.79 00:31:33.382 lat (usec): min=3923, max=36556, avg=13937.76, stdev=4294.03 00:31:33.382 clat percentiles (usec): 00:31:33.382 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10814], 00:31:33.382 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12649], 60.00th=[13304], 00:31:33.382 | 70.00th=[14484], 80.00th=[17171], 90.00th=[20317], 95.00th=[23200], 00:31:33.382 | 99.00th=[27657], 99.50th=[28705], 99.90th=[35914], 99.95th=[35914], 00:31:33.382 | 99.99th=[36439] 00:31:33.382 write: IOPS=4914, BW=19.2MiB/s (20.1MB/s)(19.4MiB/1009msec); 0 zone resets 00:31:33.382 slat (usec): min=2, max=14788, avg=96.93, stdev=661.34 00:31:33.382 clat (usec): min=2428, max=36842, avg=12929.09, stdev=4362.54 00:31:33.382 lat (usec): min=2461, max=36877, avg=13026.03, stdev=4403.55 00:31:33.382 clat percentiles (usec): 00:31:33.382 | 1.00th=[ 5276], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[10814], 00:31:33.382 | 30.00th=[11338], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:31:33.382 | 70.00th=[12780], 80.00th=[15008], 90.00th=[17695], 95.00th=[21890], 00:31:33.382 | 99.00th=[28443], 99.50th=[30278], 99.90th=[32900], 
99.95th=[32900], 00:31:33.382 | 99.99th=[36963] 00:31:33.382 bw ( KiB/s): min=17320, max=21328, per=27.51%, avg=19324.00, stdev=2834.08, samples=2 00:31:33.382 iops : min= 4330, max= 5332, avg=4831.00, stdev=708.52, samples=2 00:31:33.382 lat (msec) : 4=0.32%, 10=13.65%, 20=77.68%, 50=8.34% 00:31:33.382 cpu : usr=3.37%, sys=4.76%, ctx=513, majf=0, minf=1 00:31:33.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:33.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:33.382 issued rwts: total=4608,4959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:33.382 00:31:33.382 Run status group 0 (all jobs): 00:31:33.382 READ: bw=63.5MiB/s (66.5MB/s), 11.9MiB/s-18.8MiB/s (12.5MB/s-19.7MB/s), io=64.0MiB (67.1MB), run=1002-1009msec 00:31:33.382 WRITE: bw=68.6MiB/s (71.9MB/s), 13.8MiB/s-20.0MiB/s (14.4MB/s-20.9MB/s), io=69.2MiB (72.6MB), run=1002-1009msec 00:31:33.382 00:31:33.382 Disk stats (read/write): 00:31:33.382 nvme0n1: ios=2237/2560, merge=0/0, ticks=21987/28761, in_queue=50748, util=84.57% 00:31:33.382 nvme0n2: ios=4117/4519, merge=0/0, ticks=24831/26343, in_queue=51174, util=99.69% 00:31:33.382 nvme0n3: ios=3072/3552, merge=0/0, ticks=40978/36821, in_queue=77799, util=88.30% 00:31:33.382 nvme0n4: ios=3674/4096, merge=0/0, ticks=41132/36634, in_queue=77766, util=89.44% 00:31:33.382 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:33.382 [global] 00:31:33.382 thread=1 00:31:33.382 invalidate=1 00:31:33.382 rw=randwrite 00:31:33.382 time_based=1 00:31:33.382 runtime=1 00:31:33.382 ioengine=libaio 00:31:33.382 direct=1 00:31:33.382 bs=4096 00:31:33.382 iodepth=128 00:31:33.382 norandommap=0 
00:31:33.382 numjobs=1 00:31:33.382 00:31:33.382 verify_dump=1 00:31:33.382 verify_backlog=512 00:31:33.382 verify_state_save=0 00:31:33.382 do_verify=1 00:31:33.382 verify=crc32c-intel 00:31:33.382 [job0] 00:31:33.382 filename=/dev/nvme0n1 00:31:33.382 [job1] 00:31:33.382 filename=/dev/nvme0n2 00:31:33.382 [job2] 00:31:33.382 filename=/dev/nvme0n3 00:31:33.382 [job3] 00:31:33.382 filename=/dev/nvme0n4 00:31:33.382 Could not set queue depth (nvme0n1) 00:31:33.382 Could not set queue depth (nvme0n2) 00:31:33.382 Could not set queue depth (nvme0n3) 00:31:33.382 Could not set queue depth (nvme0n4) 00:31:33.640 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:33.640 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:33.640 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:33.641 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:33.641 fio-3.35 00:31:33.641 Starting 4 threads 00:31:35.013 00:31:35.013 job0: (groupid=0, jobs=1): err= 0: pid=93852: Wed Nov 20 11:26:02 2024 00:31:35.013 read: IOPS=7115, BW=27.8MiB/s (29.1MB/s)(27.9MiB/1005msec) 00:31:35.013 slat (nsec): min=1261, max=8855.1k, avg=74562.19, stdev=605721.44 00:31:35.013 clat (usec): min=2178, max=18705, avg=9457.67, stdev=2389.68 00:31:35.013 lat (usec): min=3048, max=19977, avg=9532.23, stdev=2448.03 00:31:35.013 clat percentiles (usec): 00:31:35.013 | 1.00th=[ 5735], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 7701], 00:31:35.013 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 9372], 00:31:35.013 | 70.00th=[10028], 80.00th=[11076], 90.00th=[13304], 95.00th=[14353], 00:31:35.013 | 99.00th=[16712], 99.50th=[17957], 99.90th=[18220], 99.95th=[18744], 00:31:35.013 | 99.99th=[18744] 00:31:35.013 write: IOPS=7132, BW=27.9MiB/s 
(29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:31:35.013 slat (usec): min=2, max=8442, avg=59.63, stdev=454.96 00:31:35.013 clat (usec): min=1423, max=18663, avg=8341.72, stdev=2364.23 00:31:35.013 lat (usec): min=1463, max=18665, avg=8401.35, stdev=2387.39 00:31:35.013 clat percentiles (usec): 00:31:35.013 | 1.00th=[ 3556], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6390], 00:31:35.013 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 7963], 60.00th=[ 8356], 00:31:35.013 | 70.00th=[ 9503], 80.00th=[10552], 90.00th=[11207], 95.00th=[13435], 00:31:35.013 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15401], 99.95th=[16909], 00:31:35.013 | 99.99th=[18744] 00:31:35.013 bw ( KiB/s): min=28320, max=29024, per=42.84%, avg=28672.00, stdev=497.80, samples=2 00:31:35.013 iops : min= 7080, max= 7256, avg=7168.00, stdev=124.45, samples=2 00:31:35.014 lat (msec) : 2=0.05%, 4=0.59%, 10=71.01%, 20=28.35% 00:31:35.014 cpu : usr=6.08%, sys=6.77%, ctx=460, majf=0, minf=1 00:31:35.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:31:35.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.014 issued rwts: total=7151,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.014 job1: (groupid=0, jobs=1): err= 0: pid=93853: Wed Nov 20 11:26:02 2024 00:31:35.014 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:31:35.014 slat (nsec): min=1476, max=21125k, avg=226408.12, stdev=1492175.71 00:31:35.014 clat (usec): min=11424, max=67359, avg=29191.61, stdev=12665.59 00:31:35.014 lat (usec): min=11434, max=71131, avg=29418.02, stdev=12770.57 00:31:35.014 clat percentiles (usec): 00:31:35.014 | 1.00th=[11994], 5.00th=[12518], 10.00th=[13042], 20.00th=[14222], 00:31:35.014 | 30.00th=[19006], 40.00th=[25297], 50.00th=[30802], 60.00th=[33817], 00:31:35.014 | 70.00th=[36963], 
80.00th=[40109], 90.00th=[44827], 95.00th=[51643], 00:31:35.014 | 99.00th=[57410], 99.50th=[61080], 99.90th=[65799], 99.95th=[67634], 00:31:35.014 | 99.99th=[67634] 00:31:35.014 write: IOPS=2208, BW=8836KiB/s (9048kB/s)(8924KiB/1010msec); 0 zone resets 00:31:35.014 slat (usec): min=2, max=27563, avg=233.89, stdev=1669.81 00:31:35.014 clat (usec): min=5775, max=69346, avg=30535.52, stdev=12807.88 00:31:35.014 lat (usec): min=5787, max=69368, avg=30769.42, stdev=12945.13 00:31:35.014 clat percentiles (usec): 00:31:35.014 | 1.00th=[11731], 5.00th=[12911], 10.00th=[14353], 20.00th=[18744], 00:31:35.014 | 30.00th=[24249], 40.00th=[26608], 50.00th=[27395], 60.00th=[31065], 00:31:35.014 | 70.00th=[34341], 80.00th=[45351], 90.00th=[51643], 95.00th=[53216], 00:31:35.014 | 99.00th=[61080], 99.50th=[62653], 99.90th=[64226], 99.95th=[65799], 00:31:35.014 | 99.99th=[69731] 00:31:35.014 bw ( KiB/s): min= 8184, max= 8648, per=12.57%, avg=8416.00, stdev=328.10, samples=2 00:31:35.014 iops : min= 2046, max= 2162, avg=2104.00, stdev=82.02, samples=2 00:31:35.014 lat (msec) : 10=0.12%, 20=28.70%, 50=62.21%, 100=8.97% 00:31:35.014 cpu : usr=2.28%, sys=2.48%, ctx=175, majf=0, minf=1 00:31:35.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:31:35.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.014 issued rwts: total=2048,2231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.014 job2: (groupid=0, jobs=1): err= 0: pid=93854: Wed Nov 20 11:26:02 2024 00:31:35.014 read: IOPS=3070, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1013msec) 00:31:35.014 slat (nsec): min=1837, max=25125k, avg=142228.78, stdev=1079732.97 00:31:35.014 clat (usec): min=8967, max=46730, avg=18376.65, stdev=6140.15 00:31:35.014 lat (usec): min=8975, max=46757, avg=18518.88, stdev=6214.65 00:31:35.014 clat 
percentiles (usec): 00:31:35.014 | 1.00th=[ 9241], 5.00th=[12256], 10.00th=[12518], 20.00th=[13566], 00:31:35.014 | 30.00th=[15008], 40.00th=[16450], 50.00th=[16909], 60.00th=[17695], 00:31:35.014 | 70.00th=[19792], 80.00th=[22414], 90.00th=[24249], 95.00th=[27919], 00:31:35.014 | 99.00th=[40109], 99.50th=[43779], 99.90th=[46400], 99.95th=[46400], 00:31:35.014 | 99.99th=[46924] 00:31:35.014 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec); 0 zone resets 00:31:35.014 slat (usec): min=3, max=13831, avg=148.72, stdev=946.38 00:31:35.014 clat (usec): min=3279, max=64814, avg=19861.50, stdev=11102.38 00:31:35.014 lat (usec): min=3291, max=64827, avg=20010.22, stdev=11182.25 00:31:35.014 clat percentiles (usec): 00:31:35.014 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[12256], 00:31:35.014 | 30.00th=[13566], 40.00th=[14877], 50.00th=[15795], 60.00th=[17171], 00:31:35.014 | 70.00th=[21103], 80.00th=[26608], 90.00th=[32113], 95.00th=[46924], 00:31:35.014 | 99.00th=[60556], 99.50th=[62129], 99.90th=[64750], 99.95th=[64750], 00:31:35.014 | 99.99th=[64750] 00:31:35.014 bw ( KiB/s): min=11576, max=16384, per=20.89%, avg=13980.00, stdev=3399.77, samples=2 00:31:35.014 iops : min= 2894, max= 4096, avg=3495.00, stdev=849.94, samples=2 00:31:35.014 lat (msec) : 4=0.09%, 10=4.91%, 20=62.70%, 50=30.10%, 100=2.20% 00:31:35.014 cpu : usr=3.16%, sys=4.74%, ctx=229, majf=0, minf=1 00:31:35.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:35.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.014 issued rwts: total=3110,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.014 job3: (groupid=0, jobs=1): err= 0: pid=93855: Wed Nov 20 11:26:02 2024 00:31:35.014 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:31:35.014 slat (usec): 
min=2, max=21168, avg=122.44, stdev=1015.01 00:31:35.014 clat (usec): min=3215, max=68259, avg=15924.86, stdev=9600.34 00:31:35.014 lat (usec): min=3224, max=68263, avg=16047.31, stdev=9708.78 00:31:35.014 clat percentiles (usec): 00:31:35.014 | 1.00th=[ 3359], 5.00th=[ 7767], 10.00th=[ 8979], 20.00th=[10290], 00:31:35.014 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11863], 60.00th=[13173], 00:31:35.014 | 70.00th=[16319], 80.00th=[22676], 90.00th=[27657], 95.00th=[36963], 00:31:35.014 | 99.00th=[56361], 99.50th=[62129], 99.90th=[68682], 99.95th=[68682], 00:31:35.014 | 99.99th=[68682] 00:31:35.014 write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(15.5MiB/1010msec); 0 zone resets 00:31:35.014 slat (nsec): min=1933, max=24733k, avg=117869.95, stdev=738363.46 00:31:35.014 clat (usec): min=6107, max=70441, avg=17813.58, stdev=12171.51 00:31:35.014 lat (usec): min=6394, max=70448, avg=17931.45, stdev=12237.43 00:31:35.014 clat percentiles (usec): 00:31:35.014 | 1.00th=[ 6783], 5.00th=[ 7177], 10.00th=[ 7701], 20.00th=[10290], 00:31:35.014 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:31:35.014 | 70.00th=[20579], 80.00th=[26870], 90.00th=[36963], 95.00th=[43254], 00:31:35.014 | 99.00th=[64226], 99.50th=[67634], 99.90th=[70779], 99.95th=[70779], 00:31:35.014 | 99.99th=[70779] 00:31:35.014 bw ( KiB/s): min=10240, max=20480, per=22.95%, avg=15360.00, stdev=7240.77, samples=2 00:31:35.014 iops : min= 2560, max= 5120, avg=3840.00, stdev=1810.19, samples=2 00:31:35.014 lat (msec) : 4=0.49%, 10=17.08%, 20=55.34%, 50=25.49%, 100=1.59% 00:31:35.014 cpu : usr=2.18%, sys=4.36%, ctx=376, majf=0, minf=1 00:31:35.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:35.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.014 issued rwts: total=3584,3967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.014 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:31:35.014 00:31:35.014 Run status group 0 (all jobs): 00:31:35.014 READ: bw=61.3MiB/s (64.3MB/s), 8111KiB/s-27.8MiB/s (8306kB/s-29.1MB/s), io=62.1MiB (65.1MB), run=1005-1013msec 00:31:35.014 WRITE: bw=65.4MiB/s (68.5MB/s), 8836KiB/s-27.9MiB/s (9048kB/s-29.2MB/s), io=66.2MiB (69.4MB), run=1005-1013msec 00:31:35.014 00:31:35.014 Disk stats (read/write): 00:31:35.014 nvme0n1: ios=5762/6144, merge=0/0, ticks=53209/50608, in_queue=103817, util=86.87% 00:31:35.014 nvme0n2: ios=1480/1536, merge=0/0, ticks=25981/26782, in_queue=52763, util=86.89% 00:31:35.014 nvme0n3: ios=3102/3079, merge=0/0, ticks=55844/49075, in_queue=104919, util=98.96% 00:31:35.014 nvme0n4: ios=3276/3584, merge=0/0, ticks=30943/31788, in_queue=62731, util=96.22% 00:31:35.014 11:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:35.014 11:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=94084 00:31:35.014 11:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:35.014 11:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:35.014 [global] 00:31:35.014 thread=1 00:31:35.014 invalidate=1 00:31:35.014 rw=read 00:31:35.014 time_based=1 00:31:35.014 runtime=10 00:31:35.014 ioengine=libaio 00:31:35.014 direct=1 00:31:35.014 bs=4096 00:31:35.014 iodepth=1 00:31:35.014 norandommap=1 00:31:35.014 numjobs=1 00:31:35.014 00:31:35.014 [job0] 00:31:35.014 filename=/dev/nvme0n1 00:31:35.014 [job1] 00:31:35.014 filename=/dev/nvme0n2 00:31:35.014 [job2] 00:31:35.014 filename=/dev/nvme0n3 00:31:35.014 [job3] 00:31:35.014 filename=/dev/nvme0n4 00:31:35.014 Could not set queue depth (nvme0n1) 00:31:35.014 Could not set queue depth (nvme0n2) 00:31:35.014 Could not set queue depth (nvme0n3) 
00:31:35.014 Could not set queue depth (nvme0n4) 00:31:35.274 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.274 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.274 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.274 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.274 fio-3.35 00:31:35.274 Starting 4 threads 00:31:37.807 11:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:38.066 11:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:38.066 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=307200, buflen=4096 00:31:38.066 fio: pid=94231, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:38.324 11:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:38.324 11:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:38.324 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=561152, buflen=4096 00:31:38.324 fio: pid=94230, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:38.324 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=49197056, buflen=4096 00:31:38.324 fio: pid=94228, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:38.324 11:26:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:38.583 11:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:38.583 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=50319360, buflen=4096 00:31:38.583 fio: pid=94229, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:38.583 11:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:38.583 11:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:38.583 00:31:38.583 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=94228: Wed Nov 20 11:26:06 2024 00:31:38.583 read: IOPS=3826, BW=14.9MiB/s (15.7MB/s)(46.9MiB/3139msec) 00:31:38.583 slat (usec): min=6, max=13947, avg=11.13, stdev=166.05 00:31:38.583 clat (usec): min=177, max=773, avg=246.28, stdev=23.14 00:31:38.583 lat (usec): min=184, max=14235, avg=257.42, stdev=168.51 00:31:38.583 clat percentiles (usec): 00:31:38.583 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 235], 20.00th=[ 241], 00:31:38.583 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 247], 00:31:38.583 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 258], 00:31:38.583 | 99.00th=[ 281], 99.50th=[ 363], 99.90th=[ 506], 99.95th=[ 725], 00:31:38.583 | 99.99th=[ 766] 00:31:38.583 bw ( KiB/s): min=15260, max=15520, per=53.08%, avg=15451.33, stdev=97.14, samples=6 00:31:38.583 iops : min= 3815, max= 3880, avg=3862.83, stdev=24.29, samples=6 00:31:38.583 lat (usec) : 250=73.43%, 500=26.42%, 750=0.11%, 
1000=0.03% 00:31:38.583 cpu : usr=2.45%, sys=6.18%, ctx=12014, majf=0, minf=1 00:31:38.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.583 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.583 issued rwts: total=12012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.583 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=94229: Wed Nov 20 11:26:06 2024 00:31:38.583 read: IOPS=3647, BW=14.2MiB/s (14.9MB/s)(48.0MiB/3368msec) 00:31:38.583 slat (usec): min=6, max=16614, avg=12.85, stdev=265.99 00:31:38.583 clat (usec): min=172, max=2639, avg=257.26, stdev=78.21 00:31:38.583 lat (usec): min=179, max=16933, avg=270.11, stdev=278.54 00:31:38.583 clat percentiles (usec): 00:31:38.583 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 217], 00:31:38.583 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:31:38.583 | 70.00th=[ 233], 80.00th=[ 253], 90.00th=[ 404], 95.00th=[ 408], 00:31:38.583 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 709], 99.95th=[ 750], 00:31:38.583 | 99.99th=[ 1647] 00:31:38.583 bw ( KiB/s): min= 9728, max=17168, per=50.50%, avg=14698.33, stdev=3491.32, samples=6 00:31:38.583 iops : min= 2432, max= 4292, avg=3674.50, stdev=872.78, samples=6 00:31:38.583 lat (usec) : 250=79.16%, 500=20.57%, 750=0.20%, 1000=0.03% 00:31:38.583 lat (msec) : 2=0.02%, 4=0.01% 00:31:38.583 cpu : usr=2.41%, sys=5.44%, ctx=12290, majf=0, minf=2 00:31:38.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.583 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.583 issued rwts: total=12286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:38.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.583 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=94230: Wed Nov 20 11:26:06 2024 00:31:38.583 read: IOPS=46, BW=185KiB/s (189kB/s)(548KiB/2970msec) 00:31:38.584 slat (nsec): min=8285, max=32870, avg=17439.87, stdev=7473.61 00:31:38.584 clat (usec): min=216, max=42231, avg=21497.55, stdev=20436.42 00:31:38.584 lat (usec): min=225, max=42242, avg=21514.93, stdev=20436.48 00:31:38.584 clat percentiles (usec): 00:31:38.584 | 1.00th=[ 221], 5.00th=[ 249], 10.00th=[ 269], 20.00th=[ 338], 00:31:38.584 | 30.00th=[ 375], 40.00th=[ 478], 50.00th=[40633], 60.00th=[41157], 00:31:38.584 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:38.584 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:38.584 | 99.99th=[42206] 00:31:38.584 bw ( KiB/s): min= 120, max= 224, per=0.60%, avg=174.40, stdev=40.56, samples=5 00:31:38.584 iops : min= 30, max= 56, avg=43.60, stdev=10.14, samples=5 00:31:38.584 lat (usec) : 250=5.80%, 500=37.68%, 750=2.90% 00:31:38.584 lat (msec) : 2=1.45%, 50=51.45% 00:31:38.584 cpu : usr=0.03%, sys=0.13%, ctx=138, majf=0, minf=2 00:31:38.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.584 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.584 issued rwts: total=138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.584 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=94231: Wed Nov 20 11:26:06 2024 00:31:38.584 read: IOPS=27, BW=110KiB/s (112kB/s)(300KiB/2733msec) 00:31:38.584 slat (nsec): min=8238, max=37338, avg=17895.83, stdev=6979.65 00:31:38.584 clat (usec): min=246, max=42033, avg=36126.87, 
stdev=13327.81 00:31:38.584 lat (usec): min=270, max=42043, avg=36144.72, stdev=13325.35 00:31:38.584 clat percentiles (usec): 00:31:38.584 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 330], 20.00th=[40633], 00:31:38.584 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:38.584 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:38.584 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:38.584 | 99.99th=[42206] 00:31:38.584 bw ( KiB/s): min= 104, max= 128, per=0.38%, avg=112.00, stdev=11.31, samples=5 00:31:38.584 iops : min= 26, max= 32, avg=28.00, stdev= 2.83, samples=5 00:31:38.584 lat (usec) : 250=2.63%, 500=9.21% 00:31:38.584 lat (msec) : 50=86.84% 00:31:38.584 cpu : usr=0.00%, sys=0.07%, ctx=76, majf=0, minf=1 00:31:38.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.584 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.584 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.584 00:31:38.584 Run status group 0 (all jobs): 00:31:38.584 READ: bw=28.4MiB/s (29.8MB/s), 110KiB/s-14.9MiB/s (112kB/s-15.7MB/s), io=95.7MiB (100MB), run=2733-3368msec 00:31:38.584 00:31:38.584 Disk stats (read/write): 00:31:38.584 nvme0n1: ios=11970/0, merge=0/0, ticks=2793/0, in_queue=2793, util=94.98% 00:31:38.584 nvme0n2: ios=12286/0, merge=0/0, ticks=2985/0, in_queue=2985, util=94.69% 00:31:38.584 nvme0n3: ios=131/0, merge=0/0, ticks=2819/0, in_queue=2819, util=96.52% 00:31:38.584 nvme0n4: ios=72/0, merge=0/0, ticks=2585/0, in_queue=2585, util=96.45% 00:31:38.843 11:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:38.843 11:26:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:39.102 11:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:39.102 11:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:39.360 11:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:39.360 11:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:39.619 11:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:39.619 11:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:39.619 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:39.619 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 94084 00:31:39.619 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:39.619 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:39.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:39.877 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:31:39.877 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:39.877 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:39.877 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:39.877 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:39.877 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:39.877 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:39.877 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:39.877 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:39.877 nvmf hotplug test: fio failed as expected 00:31:39.877 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:40.136 11:26:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:40.136 rmmod nvme_tcp 00:31:40.136 rmmod nvme_fabrics 00:31:40.136 rmmod nvme_keyring 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 91612 ']' 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 91612 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 91612 ']' 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 91612 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91612 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91612' 00:31:40.136 killing process with pid 91612 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 91612 00:31:40.136 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 91612 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:40.394 11:26:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.394 11:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.298 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:42.298 00:31:42.298 real 0m25.950s 00:31:42.298 user 1m31.574s 00:31:42.298 sys 0m11.306s 00:31:42.298 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.298 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:42.298 ************************************ 00:31:42.298 END TEST nvmf_fio_target 00:31:42.298 ************************************ 00:31:42.558 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:42.558 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:42.558 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:42.558 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:42.558 ************************************ 00:31:42.558 START TEST nvmf_bdevio 00:31:42.558 ************************************ 00:31:42.558 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:42.558 * Looking for test storage... 00:31:42.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:42.558 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:42.558 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:42.558 11:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:42.558 11:26:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:42.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.558 --rc genhtml_branch_coverage=1 00:31:42.558 --rc genhtml_function_coverage=1 00:31:42.558 --rc genhtml_legend=1 00:31:42.558 --rc geninfo_all_blocks=1 00:31:42.558 --rc geninfo_unexecuted_blocks=1 00:31:42.558 00:31:42.558 ' 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:42.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.558 --rc genhtml_branch_coverage=1 00:31:42.558 --rc genhtml_function_coverage=1 00:31:42.558 --rc genhtml_legend=1 00:31:42.558 --rc geninfo_all_blocks=1 00:31:42.558 --rc geninfo_unexecuted_blocks=1 00:31:42.558 00:31:42.558 ' 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:42.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.558 --rc genhtml_branch_coverage=1 00:31:42.558 --rc genhtml_function_coverage=1 00:31:42.558 --rc genhtml_legend=1 00:31:42.558 --rc geninfo_all_blocks=1 00:31:42.558 --rc geninfo_unexecuted_blocks=1 00:31:42.558 00:31:42.558 ' 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:42.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.558 --rc genhtml_branch_coverage=1 00:31:42.558 --rc genhtml_function_coverage=1 00:31:42.558 --rc genhtml_legend=1 00:31:42.558 --rc 
geninfo_all_blocks=1 00:31:42.558 --rc geninfo_unexecuted_blocks=1 00:31:42.558 00:31:42.558 ' 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:42.558 11:26:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.558 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:42.818 11:26:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:42.818 11:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.386 11:26:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:49.386 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:49.386 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.386 11:26:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:49.386 Found net devices under 0000:86:00.0: cvl_0_0 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:49.386 Found net devices under 0000:86:00.1: cvl_0_1 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:49.386 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.387 11:26:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:49.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:31:49.387 00:31:49.387 --- 10.0.0.2 ping statistics --- 00:31:49.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.387 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:31:49.387 00:31:49.387 --- 10.0.0.1 ping statistics --- 00:31:49.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.387 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=98571 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 98571 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 98571 ']' 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.387 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:49.387 [2024-11-20 11:26:16.036250] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:49.387 [2024-11-20 11:26:16.037176] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:31:49.387 [2024-11-20 11:26:16.037209] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.387 [2024-11-20 11:26:16.118050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:49.387 [2024-11-20 11:26:16.159536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.387 [2024-11-20 11:26:16.159575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.387 [2024-11-20 11:26:16.159582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.387 [2024-11-20 11:26:16.159588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.387 [2024-11-20 11:26:16.159594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:49.387 [2024-11-20 11:26:16.161229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:49.387 [2024-11-20 11:26:16.161341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:49.387 [2024-11-20 11:26:16.161449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:49.387 [2024-11-20 11:26:16.161450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:49.387 [2024-11-20 11:26:16.227640] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:49.387 [2024-11-20 11:26:16.228471] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:49.387 [2024-11-20 11:26:16.228540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:49.387 [2024-11-20 11:26:16.229008] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:49.387 [2024-11-20 11:26:16.229050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:49.387 [2024-11-20 11:26:16.294149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:49.387 Malloc0 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.387 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:49.388 [2024-11-20 11:26:16.378304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:49.388 { 00:31:49.388 "params": { 00:31:49.388 "name": "Nvme$subsystem", 00:31:49.388 "trtype": "$TEST_TRANSPORT", 00:31:49.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:49.388 "adrfam": "ipv4", 00:31:49.388 "trsvcid": "$NVMF_PORT", 00:31:49.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:49.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:49.388 "hdgst": ${hdgst:-false}, 00:31:49.388 "ddgst": ${ddgst:-false} 00:31:49.388 }, 00:31:49.388 "method": "bdev_nvme_attach_controller" 00:31:49.388 } 00:31:49.388 EOF 00:31:49.388 )") 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:49.388 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:49.388 "params": { 00:31:49.388 "name": "Nvme1", 00:31:49.388 "trtype": "tcp", 00:31:49.388 "traddr": "10.0.0.2", 00:31:49.388 "adrfam": "ipv4", 00:31:49.388 "trsvcid": "4420", 00:31:49.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:49.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:49.388 "hdgst": false, 00:31:49.388 "ddgst": false 00:31:49.388 }, 00:31:49.388 "method": "bdev_nvme_attach_controller" 00:31:49.388 }' 00:31:49.388 [2024-11-20 11:26:16.430621] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:31:49.388 [2024-11-20 11:26:16.430668] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98709 ] 00:31:49.388 [2024-11-20 11:26:16.505394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:49.388 [2024-11-20 11:26:16.549321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.388 [2024-11-20 11:26:16.549428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.388 [2024-11-20 11:26:16.549429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.647 I/O targets: 00:31:49.647 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:49.647 00:31:49.647 00:31:49.647 CUnit - A unit testing framework for C - Version 2.1-3 00:31:49.647 http://cunit.sourceforge.net/ 00:31:49.647 00:31:49.647 00:31:49.647 Suite: bdevio tests on: Nvme1n1 00:31:49.647 Test: blockdev write read block ...passed 00:31:49.647 Test: blockdev write zeroes read block ...passed 00:31:49.647 Test: blockdev write zeroes read no split ...passed 00:31:49.647 Test: blockdev 
write zeroes read split ...passed 00:31:49.647 Test: blockdev write zeroes read split partial ...passed 00:31:49.647 Test: blockdev reset ...[2024-11-20 11:26:17.014592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:49.647 [2024-11-20 11:26:17.014661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a7340 (9): Bad file descriptor 00:31:49.647 [2024-11-20 11:26:17.067089] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:49.647 passed 00:31:49.647 Test: blockdev write read 8 blocks ...passed 00:31:49.647 Test: blockdev write read size > 128k ...passed 00:31:49.647 Test: blockdev write read invalid size ...passed 00:31:49.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:49.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:49.906 Test: blockdev write read max offset ...passed 00:31:49.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:49.906 Test: blockdev writev readv 8 blocks ...passed 00:31:49.906 Test: blockdev writev readv 30 x 1block ...passed 00:31:49.906 Test: blockdev writev readv block ...passed 00:31:49.906 Test: blockdev writev readv size > 128k ...passed 00:31:49.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:49.906 Test: blockdev comparev and writev ...[2024-11-20 11:26:17.321866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:49.906 [2024-11-20 11:26:17.321895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:49.906 [2024-11-20 11:26:17.321910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:49.906 
[2024-11-20 11:26:17.321918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:49.906 [2024-11-20 11:26:17.322216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:49.906 [2024-11-20 11:26:17.322228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:49.906 [2024-11-20 11:26:17.322241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:49.906 [2024-11-20 11:26:17.322248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:49.906 [2024-11-20 11:26:17.322544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:49.906 [2024-11-20 11:26:17.322556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:49.906 [2024-11-20 11:26:17.322568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:49.906 [2024-11-20 11:26:17.322576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:49.906 [2024-11-20 11:26:17.322864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:49.906 [2024-11-20 11:26:17.322877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:49.906 [2024-11-20 11:26:17.322889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:49.906 [2024-11-20 11:26:17.322900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:49.906 passed 00:31:50.165 Test: blockdev nvme passthru rw ...passed 00:31:50.165 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:26:17.405293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:50.165 [2024-11-20 11:26:17.405309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:50.165 [2024-11-20 11:26:17.405430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:50.165 [2024-11-20 11:26:17.405441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:50.165 [2024-11-20 11:26:17.405553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:50.165 [2024-11-20 11:26:17.405564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:50.165 [2024-11-20 11:26:17.405677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:50.165 [2024-11-20 11:26:17.405687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:50.165 passed 00:31:50.165 Test: blockdev nvme admin passthru ...passed 00:31:50.165 Test: blockdev copy ...passed 00:31:50.165 00:31:50.165 Run Summary: Type Total Ran Passed Failed Inactive 00:31:50.165 suites 1 1 n/a 0 0 00:31:50.165 tests 23 23 23 0 0 00:31:50.165 asserts 152 152 152 0 n/a 00:31:50.165 00:31:50.165 Elapsed time = 1.133 
seconds 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.165 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.165 rmmod nvme_tcp 00:31:50.165 rmmod nvme_fabrics 00:31:50.165 rmmod nvme_keyring 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 98571 ']' 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 98571 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 98571 ']' 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 98571 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98571 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98571' 00:31:50.425 killing process with pid 98571 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 98571 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 98571 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:50.425 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # 
iptr 00:31:50.684 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:50.684 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:50.684 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:50.684 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:50.684 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:50.684 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.684 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.684 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.589 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.589 00:31:52.589 real 0m10.133s 00:31:52.589 user 0m9.844s 00:31:52.589 sys 0m5.235s 00:31:52.589 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.589 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:52.589 ************************************ 00:31:52.589 END TEST nvmf_bdevio 00:31:52.589 ************************************ 00:31:52.589 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:52.589 00:31:52.589 real 4m33.674s 00:31:52.589 user 9m4.526s 00:31:52.589 sys 1m52.107s 00:31:52.589 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:31:52.589 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:52.589 ************************************ 00:31:52.589 END TEST nvmf_target_core_interrupt_mode 00:31:52.589 ************************************ 00:31:52.589 11:26:20 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:52.589 11:26:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:52.589 11:26:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.589 11:26:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:52.849 ************************************ 00:31:52.849 START TEST nvmf_interrupt 00:31:52.849 ************************************ 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:52.849 * Looking for test storage... 
00:31:52.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:52.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.849 --rc genhtml_branch_coverage=1 00:31:52.849 --rc genhtml_function_coverage=1 00:31:52.849 --rc genhtml_legend=1 00:31:52.849 --rc geninfo_all_blocks=1 00:31:52.849 --rc geninfo_unexecuted_blocks=1 00:31:52.849 00:31:52.849 ' 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:52.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.849 --rc genhtml_branch_coverage=1 00:31:52.849 --rc 
genhtml_function_coverage=1 00:31:52.849 --rc genhtml_legend=1 00:31:52.849 --rc geninfo_all_blocks=1 00:31:52.849 --rc geninfo_unexecuted_blocks=1 00:31:52.849 00:31:52.849 ' 00:31:52.849 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:52.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.849 --rc genhtml_branch_coverage=1 00:31:52.849 --rc genhtml_function_coverage=1 00:31:52.850 --rc genhtml_legend=1 00:31:52.850 --rc geninfo_all_blocks=1 00:31:52.850 --rc geninfo_unexecuted_blocks=1 00:31:52.850 00:31:52.850 ' 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:52.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.850 --rc genhtml_branch_coverage=1 00:31:52.850 --rc genhtml_function_coverage=1 00:31:52.850 --rc genhtml_legend=1 00:31:52.850 --rc geninfo_all_blocks=1 00:31:52.850 --rc geninfo_unexecuted_blocks=1 00:31:52.850 00:31:52.850 ' 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.850 
11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.850 
11:26:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.850 11:26:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:52.850 
11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:52.850 11:26:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:59.422 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.422 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.422 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.422 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.423 11:26:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:59.423 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:59.423 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.423 11:26:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:59.423 Found net devices under 0000:86:00.0: cvl_0_0 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:59.423 Found net devices under 0000:86:00.1: cvl_0_1 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.423 11:26:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.423 11:26:26 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:31:59.423 00:31:59.423 --- 10.0.0.2 ping statistics --- 00:31:59.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.423 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:31:59.423 00:31:59.423 --- 10.0.0.1 ping statistics --- 00:31:59.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.423 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.423 11:26:26 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.423 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=102347 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 102347 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 102347 ']' 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:59.424 [2024-11-20 11:26:26.251306] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:59.424 [2024-11-20 11:26:26.252336] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:31:59.424 [2024-11-20 11:26:26.252376] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.424 [2024-11-20 11:26:26.332884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:59.424 [2024-11-20 11:26:26.374773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.424 [2024-11-20 11:26:26.374812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.424 [2024-11-20 11:26:26.374820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.424 [2024-11-20 11:26:26.374826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.424 [2024-11-20 11:26:26.374832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.424 [2024-11-20 11:26:26.376045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.424 [2024-11-20 11:26:26.376045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.424 [2024-11-20 11:26:26.443685] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:59.424 [2024-11-20 11:26:26.444299] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:59.424 [2024-11-20 11:26:26.444506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:59.424 5000+0 records in 00:31:59.424 5000+0 records out 00:31:59.424 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0186111 s, 550 MB/s 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:59.424 AIO0 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.424 11:26:26 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:59.424 [2024-11-20 11:26:26.568745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:59.424 [2024-11-20 11:26:26.609131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 102347 0 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 102347 0 idle 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=102347 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 102347 -w 256 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 102347 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.26 reactor_0' 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 102347 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.26 reactor_0 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 102347 1 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 102347 1 idle 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=102347 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 102347 -w 256 00:31:59.424 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 102387 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 102387 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 
reactor_1 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=102520 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 102347 0 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 102347 0 busy 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=102347 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 102347 -w 256 00:31:59.684 11:26:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 102347 root 20 0 128.2g 46848 33792 R 73.3 0.0 0:00.37 reactor_0' 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 102347 root 20 0 128.2g 46848 33792 R 73.3 0.0 0:00.37 reactor_0 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 102347 1 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 102347 1 busy 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=102347 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:59.684 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 102347 -w 256 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 102387 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.24 reactor_1' 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 102387 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.24 reactor_1 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:59.943 11:26:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 102520 00:32:09.926 [2024-11-20 11:26:37.147406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f8d0 is same with the state(6) to be set 00:32:09.926 [2024-11-20 11:26:37.147447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f8d0 is same with the state(6) to be set 00:32:09.926 [2024-11-20 11:26:37.147460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f8d0 is same with the state(6) to be set 00:32:09.926 [2024-11-20 11:26:37.147467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f8d0 is same with the state(6) to be set 00:32:09.926 [2024-11-20 11:26:37.147474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f8d0 is same with the state(6) to be set 00:32:09.926 [2024-11-20 11:26:37.147480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f8d0 is same with the state(6) to be set 00:32:09.926 Initializing NVMe Controllers 00:32:09.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:09.926 Controller IO queue size 256, less than required. 00:32:09.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:09.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:09.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:09.926 Initialization complete. Launching workers. 
00:32:09.926 ======================================================== 00:32:09.926 Latency(us) 00:32:09.926 Device Information : IOPS MiB/s Average min max 00:32:09.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15902.72 62.12 16107.20 3862.19 30916.36 00:32:09.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16071.22 62.78 15934.20 7841.23 27951.68 00:32:09.926 ======================================================== 00:32:09.926 Total : 31973.94 124.90 16020.25 3862.19 30916.36 00:32:09.926 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 102347 0 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 102347 0 idle 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=102347 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 102347 -w 256 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 102347 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0' 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 102347 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 102347 1 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 102347 1 idle 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=102347 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:09.926 11:26:37 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 102347 -w 256 00:32:09.926 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 102387 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 102387 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:10.186 11:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:10.753 11:26:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:32:10.753 11:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:10.753 11:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:10.753 11:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:10.753 11:26:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 102347 0 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 102347 0 idle 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=102347 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 102347 -w 256 00:32:12.659 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 102347 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.50 reactor_0' 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 102347 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.50 reactor_0 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 102347 1 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 102347 1 idle 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=102347 00:32:12.918 
11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 102347 -w 256 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 102387 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1' 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 102387 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:12.918 11:26:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:13.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:13.177 rmmod nvme_tcp 00:32:13.177 rmmod nvme_fabrics 00:32:13.177 rmmod nvme_keyring 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:13.177 11:26:40 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 102347 ']' 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 102347 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 102347 ']' 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 102347 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102347 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102347' 00:32:13.177 killing process with pid 102347 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 102347 00:32:13.177 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 102347 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:13.436 11:26:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.973 11:26:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:15.973 00:32:15.973 real 0m22.844s 00:32:15.973 user 0m39.722s 00:32:15.973 sys 0m8.349s 00:32:15.973 11:26:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.973 11:26:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:15.973 ************************************ 00:32:15.973 END TEST nvmf_interrupt 00:32:15.973 ************************************ 00:32:15.973 00:32:15.973 real 27m28.411s 00:32:15.973 user 56m37.283s 00:32:15.973 sys 9m23.050s 00:32:15.973 11:26:42 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.973 11:26:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:15.973 ************************************ 00:32:15.973 END TEST nvmf_tcp 00:32:15.973 ************************************ 00:32:15.973 11:26:43 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:15.973 11:26:43 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:15.973 11:26:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:15.973 11:26:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.973 11:26:43 -- common/autotest_common.sh@10 -- # set +x 00:32:15.973 ************************************ 
00:32:15.973 START TEST spdkcli_nvmf_tcp 00:32:15.973 ************************************ 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:15.973 * Looking for test storage... 00:32:15.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.973 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:15.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.974 --rc genhtml_branch_coverage=1 00:32:15.974 --rc genhtml_function_coverage=1 00:32:15.974 --rc genhtml_legend=1 00:32:15.974 --rc geninfo_all_blocks=1 00:32:15.974 --rc geninfo_unexecuted_blocks=1 00:32:15.974 00:32:15.974 ' 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:15.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.974 --rc genhtml_branch_coverage=1 00:32:15.974 --rc genhtml_function_coverage=1 00:32:15.974 --rc genhtml_legend=1 00:32:15.974 --rc geninfo_all_blocks=1 
00:32:15.974 --rc geninfo_unexecuted_blocks=1 00:32:15.974 00:32:15.974 ' 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:15.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.974 --rc genhtml_branch_coverage=1 00:32:15.974 --rc genhtml_function_coverage=1 00:32:15.974 --rc genhtml_legend=1 00:32:15.974 --rc geninfo_all_blocks=1 00:32:15.974 --rc geninfo_unexecuted_blocks=1 00:32:15.974 00:32:15.974 ' 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:15.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.974 --rc genhtml_branch_coverage=1 00:32:15.974 --rc genhtml_function_coverage=1 00:32:15.974 --rc genhtml_legend=1 00:32:15.974 --rc geninfo_all_blocks=1 00:32:15.974 --rc geninfo_unexecuted_blocks=1 00:32:15.974 00:32:15.974 ' 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:15.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=105200 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 105200 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 105200 ']' 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:15.974 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.974 11:26:43 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.975 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.975 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.975 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:15.975 [2024-11-20 11:26:43.327273] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:32:15.975 [2024-11-20 11:26:43.327322] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105200 ] 00:32:15.975 [2024-11-20 11:26:43.400802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:15.975 [2024-11-20 11:26:43.446149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.975 [2024-11-20 11:26:43.446151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.234 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.234 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:16.234 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:16.234 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:16.234 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:16.234 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:16.234 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:16.234 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:16.234 
11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:16.234 11:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:16.234 11:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:16.234 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:16.234 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:16.234 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:16.234 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:16.234 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:16.234 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:16.234 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:16.234 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:16.234 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:16.234 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:16.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:16.234 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:16.234 ' 00:32:19.521 [2024-11-20 11:26:46.295504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.458 [2024-11-20 11:26:47.636040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:23.092 [2024-11-20 11:26:50.111776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:32:24.993 [2024-11-20 11:26:52.290484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:26.898 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:26.898 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:26.898 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:26.898 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:26.898 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:26.898 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:26.898 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:26.898 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:26.898 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:26.898 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:26.898 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:26.898 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:26.898 11:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:26.898 11:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:26.898 
11:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:26.898 11:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:26.898 11:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:26.898 11:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:26.898 11:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:26.898 11:26:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:27.157 11:26:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:27.157 11:26:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:27.157 11:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:27.157 11:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:27.157 11:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:27.157 11:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:27.157 11:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:27.157 11:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:27.157 11:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:27.158 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:27.158 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:27.158 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:27.158 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:27.158 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:27.158 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:27.158 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:27.158 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:27.158 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:27.158 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:27.158 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:27.158 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:27.158 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:27.158 ' 00:32:33.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:33.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:33.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:33.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:33.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:33.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:33.726 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:33.726 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:33.726 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:33.726 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:33.726 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:33.726 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:33.726 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:33.727 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 105200 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 105200 ']' 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 105200 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105200 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105200' 00:32:33.727 killing process with pid 105200 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 105200 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 105200 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 105200 ']' 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 105200 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 105200 ']' 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 105200 00:32:33.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (105200) - No such process 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 105200 is not found' 00:32:33.727 Process with pid 105200 is not found 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:33.727 00:32:33.727 real 0m17.382s 00:32:33.727 user 0m38.273s 00:32:33.727 sys 0m0.822s 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.727 11:27:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.727 ************************************ 00:32:33.727 END TEST spdkcli_nvmf_tcp 00:32:33.727 ************************************ 00:32:33.727 11:27:00 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:33.727 11:27:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:33.727 11:27:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.727 11:27:00 -- common/autotest_common.sh@10 
-- # set +x 00:32:33.727 ************************************ 00:32:33.727 START TEST nvmf_identify_passthru 00:32:33.727 ************************************ 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:33.727 * Looking for test storage... 00:32:33.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:33.727 11:27:00 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.727 --rc genhtml_branch_coverage=1 00:32:33.727 --rc genhtml_function_coverage=1 00:32:33.727 --rc genhtml_legend=1 00:32:33.727 --rc geninfo_all_blocks=1 00:32:33.727 --rc geninfo_unexecuted_blocks=1 00:32:33.727 00:32:33.727 ' 00:32:33.727 
11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.727 --rc genhtml_branch_coverage=1 00:32:33.727 --rc genhtml_function_coverage=1 00:32:33.727 --rc genhtml_legend=1 00:32:33.727 --rc geninfo_all_blocks=1 00:32:33.727 --rc geninfo_unexecuted_blocks=1 00:32:33.727 00:32:33.727 ' 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.727 --rc genhtml_branch_coverage=1 00:32:33.727 --rc genhtml_function_coverage=1 00:32:33.727 --rc genhtml_legend=1 00:32:33.727 --rc geninfo_all_blocks=1 00:32:33.727 --rc geninfo_unexecuted_blocks=1 00:32:33.727 00:32:33.727 ' 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.727 --rc genhtml_branch_coverage=1 00:32:33.727 --rc genhtml_function_coverage=1 00:32:33.727 --rc genhtml_legend=1 00:32:33.727 --rc geninfo_all_blocks=1 00:32:33.727 --rc geninfo_unexecuted_blocks=1 00:32:33.727 00:32:33.727 ' 00:32:33.727 11:27:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.727 11:27:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.727 11:27:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.727 11:27:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.727 11:27:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:33.727 11:27:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:33.727 11:27:00 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:33.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:33.727 11:27:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.727 11:27:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.727 11:27:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.727 11:27:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.727 11:27:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.727 11:27:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:33.727 11:27:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.727 11:27:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:33.727 11:27:00 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:33.727 11:27:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:39.006 
11:27:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:39.006 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:39.006 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:39.006 Found net devices under 0000:86:00.0: cvl_0_0 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.006 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.007 11:27:06 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:39.007 Found net devices under 0000:86:00.1: cvl_0_1 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:39.007 
11:27:06 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:39.007 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:39.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:39.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:32:39.266 00:32:39.266 --- 10.0.0.2 ping statistics --- 00:32:39.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.266 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:39.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:32:39.266 00:32:39.266 --- 10.0.0.1 ping statistics --- 00:32:39.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.266 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:39.266 11:27:06 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:39.266 11:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:39.266 11:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:39.266 
11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:39.266 11:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:39.266 11:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:39.266 11:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:39.266 11:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:39.266 11:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:39.266 11:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:43.560 11:27:10 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:43.560 11:27:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:43.560 11:27:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:43.560 11:27:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:47.749 11:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:47.749 11:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:47.749 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.749 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.749 11:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:47.749 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.749 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.749 11:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=112990 00:32:47.749 11:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:47.749 11:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:47.749 11:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 112990 00:32:47.749 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 112990 ']' 00:32:47.749 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:32:47.749 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.749 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.750 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.750 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.750 [2024-11-20 11:27:15.103643] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:32:47.750 [2024-11-20 11:27:15.103691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.750 [2024-11-20 11:27:15.185931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:47.750 [2024-11-20 11:27:15.229301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.750 [2024-11-20 11:27:15.229340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.750 [2024-11-20 11:27:15.229347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.750 [2024-11-20 11:27:15.229352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.750 [2024-11-20 11:27:15.229357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:47.750 [2024-11-20 11:27:15.230838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.750 [2024-11-20 11:27:15.230857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:47.750 [2024-11-20 11:27:15.230981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.750 [2024-11-20 11:27:15.230981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:48.686 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:48.686 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:48.686 11:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:48.686 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.686 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.686 INFO: Log level set to 20 00:32:48.686 INFO: Requests: 00:32:48.686 { 00:32:48.686 "jsonrpc": "2.0", 00:32:48.686 "method": "nvmf_set_config", 00:32:48.686 "id": 1, 00:32:48.686 "params": { 00:32:48.686 "admin_cmd_passthru": { 00:32:48.686 "identify_ctrlr": true 00:32:48.686 } 00:32:48.686 } 00:32:48.686 } 00:32:48.686 00:32:48.686 INFO: response: 00:32:48.686 { 00:32:48.686 "jsonrpc": "2.0", 00:32:48.686 "id": 1, 00:32:48.686 "result": true 00:32:48.686 } 00:32:48.686 00:32:48.686 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.686 11:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:48.686 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.686 11:27:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.686 INFO: Setting log level to 20 00:32:48.686 INFO: Setting log level to 20 00:32:48.686 INFO: Log level set to 20 00:32:48.686 INFO: Log level set to 20 00:32:48.686 
INFO: Requests: 00:32:48.686 { 00:32:48.686 "jsonrpc": "2.0", 00:32:48.686 "method": "framework_start_init", 00:32:48.686 "id": 1 00:32:48.686 } 00:32:48.686 00:32:48.686 INFO: Requests: 00:32:48.686 { 00:32:48.686 "jsonrpc": "2.0", 00:32:48.686 "method": "framework_start_init", 00:32:48.686 "id": 1 00:32:48.686 } 00:32:48.686 00:32:48.686 [2024-11-20 11:27:16.032281] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:48.686 INFO: response: 00:32:48.686 { 00:32:48.686 "jsonrpc": "2.0", 00:32:48.686 "id": 1, 00:32:48.686 "result": true 00:32:48.686 } 00:32:48.686 00:32:48.686 INFO: response: 00:32:48.686 { 00:32:48.686 "jsonrpc": "2.0", 00:32:48.686 "id": 1, 00:32:48.686 "result": true 00:32:48.686 } 00:32:48.686 00:32:48.686 11:27:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.686 11:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:48.686 11:27:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.686 11:27:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.686 INFO: Setting log level to 40 00:32:48.686 INFO: Setting log level to 40 00:32:48.686 INFO: Setting log level to 40 00:32:48.686 [2024-11-20 11:27:16.041630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.686 11:27:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.686 11:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:48.686 11:27:16 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:48.686 11:27:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.686 11:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:48.686 11:27:16 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.686 11:27:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.975 Nvme0n1 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.975 11:27:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.975 11:27:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.975 11:27:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.975 [2024-11-20 11:27:18.950646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.975 11:27:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.975 11:27:18 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.975 [ 00:32:51.975 { 00:32:51.975 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:51.975 "subtype": "Discovery", 00:32:51.975 "listen_addresses": [], 00:32:51.975 "allow_any_host": true, 00:32:51.975 "hosts": [] 00:32:51.975 }, 00:32:51.975 { 00:32:51.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:51.975 "subtype": "NVMe", 00:32:51.975 "listen_addresses": [ 00:32:51.975 { 00:32:51.975 "trtype": "TCP", 00:32:51.975 "adrfam": "IPv4", 00:32:51.975 "traddr": "10.0.0.2", 00:32:51.975 "trsvcid": "4420" 00:32:51.975 } 00:32:51.975 ], 00:32:51.975 "allow_any_host": true, 00:32:51.975 "hosts": [], 00:32:51.975 "serial_number": "SPDK00000000000001", 00:32:51.975 "model_number": "SPDK bdev Controller", 00:32:51.975 "max_namespaces": 1, 00:32:51.975 "min_cntlid": 1, 00:32:51.975 "max_cntlid": 65519, 00:32:51.975 "namespaces": [ 00:32:51.975 { 00:32:51.975 "nsid": 1, 00:32:51.975 "bdev_name": "Nvme0n1", 00:32:51.975 "name": "Nvme0n1", 00:32:51.975 "nguid": "29BEF1AF5DBC482AB25B2E28F3705C7E", 00:32:51.975 "uuid": "29bef1af-5dbc-482a-b25b-2e28f3705c7e" 00:32:51.975 } 00:32:51.975 ] 00:32:51.975 } 00:32:51.975 ] 00:32:51.975 11:27:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.975 11:27:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:51.975 11:27:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:51.975 11:27:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:51.975 11:27:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:51.975 11:27:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:51.975 11:27:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:51.975 11:27:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:51.975 11:27:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:51.975 11:27:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:51.975 11:27:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:51.976 11:27:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.976 11:27:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:51.976 11:27:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:51.976 11:27:19 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:51.976 11:27:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:51.976 11:27:19 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.976 11:27:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:51.976 11:27:19 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.976 11:27:19 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.976 rmmod nvme_tcp 00:32:51.976 rmmod nvme_fabrics 00:32:51.976 rmmod nvme_keyring 00:32:51.976 11:27:19 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.976 11:27:19 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:51.976 11:27:19 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:51.976 11:27:19 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 112990 ']' 00:32:51.976 11:27:19 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 112990 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 112990 ']' 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 112990 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112990 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112990' 00:32:51.976 killing process with pid 112990 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 112990 00:32:51.976 11:27:19 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 112990 00:32:53.884 11:27:20 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:53.884 11:27:20 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:53.884 11:27:20 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:53.884 11:27:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:53.884 11:27:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:53.884 11:27:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:32:53.884 11:27:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:53.884 11:27:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:53.884 11:27:20 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:53.884 11:27:20 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.884 11:27:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:53.884 11:27:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.791 11:27:22 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:55.791 00:32:55.791 real 0m22.427s 00:32:55.791 user 0m29.330s 00:32:55.791 sys 0m6.240s 00:32:55.791 11:27:22 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:55.791 11:27:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.791 ************************************ 00:32:55.791 END TEST nvmf_identify_passthru 00:32:55.791 ************************************ 00:32:55.791 11:27:22 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:55.791 11:27:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:55.791 11:27:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:55.791 11:27:22 -- common/autotest_common.sh@10 -- # set +x 00:32:55.791 ************************************ 00:32:55.791 START TEST nvmf_dif 00:32:55.791 ************************************ 00:32:55.791 11:27:23 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:55.791 * Looking for test storage... 
00:32:55.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:55.791 11:27:23 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:55.791 11:27:23 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:32:55.791 11:27:23 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:55.791 11:27:23 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:55.791 11:27:23 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:55.791 11:27:23 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:55.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.791 --rc genhtml_branch_coverage=1 00:32:55.791 --rc genhtml_function_coverage=1 00:32:55.791 --rc genhtml_legend=1 00:32:55.791 --rc geninfo_all_blocks=1 00:32:55.791 --rc geninfo_unexecuted_blocks=1 00:32:55.791 00:32:55.791 ' 00:32:55.791 11:27:23 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:55.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.791 --rc genhtml_branch_coverage=1 00:32:55.791 --rc genhtml_function_coverage=1 00:32:55.791 --rc genhtml_legend=1 00:32:55.791 --rc geninfo_all_blocks=1 00:32:55.791 --rc geninfo_unexecuted_blocks=1 00:32:55.791 00:32:55.791 ' 00:32:55.791 11:27:23 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:32:55.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.791 --rc genhtml_branch_coverage=1 00:32:55.791 --rc genhtml_function_coverage=1 00:32:55.791 --rc genhtml_legend=1 00:32:55.791 --rc geninfo_all_blocks=1 00:32:55.791 --rc geninfo_unexecuted_blocks=1 00:32:55.791 00:32:55.791 ' 00:32:55.791 11:27:23 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:55.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.791 --rc genhtml_branch_coverage=1 00:32:55.791 --rc genhtml_function_coverage=1 00:32:55.791 --rc genhtml_legend=1 00:32:55.791 --rc geninfo_all_blocks=1 00:32:55.791 --rc geninfo_unexecuted_blocks=1 00:32:55.791 00:32:55.791 ' 00:32:55.791 11:27:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:55.791 11:27:23 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.791 11:27:23 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.791 11:27:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.791 11:27:23 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.791 11:27:23 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.791 11:27:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:55.791 11:27:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:55.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:55.791 11:27:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:55.791 11:27:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:32:55.791 11:27:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:55.791 11:27:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:55.791 11:27:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:55.791 11:27:23 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:55.792 11:27:23 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:55.792 11:27:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.792 11:27:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:55.792 11:27:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.792 11:27:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:55.792 11:27:23 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:55.792 11:27:23 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:32:55.792 11:27:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:02.363 11:27:28 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:02.363 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:02.363 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:02.363 11:27:28 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:02.363 Found net devices under 0000:86:00.0: cvl_0_0 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:02.363 Found net devices under 0000:86:00.1: cvl_0_1 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:02.363 
11:27:28 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:02.363 11:27:28 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:02.363 11:27:29 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:02.363 11:27:29 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:02.363 11:27:29 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:02.363 11:27:29 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:02.364 11:27:29 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:02.364 11:27:29 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:02.364 11:27:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:02.364 11:27:29 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:02.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:02.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:33:02.364 00:33:02.364 --- 10.0.0.2 ping statistics --- 00:33:02.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.364 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:33:02.364 11:27:29 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:02.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:02.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:33:02.364 00:33:02.364 --- 10.0.0.1 ping statistics --- 00:33:02.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.364 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:33:02.364 11:27:29 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:02.364 11:27:29 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:02.364 11:27:29 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:02.364 11:27:29 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:04.900 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:04.900 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:04.900 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:04.900 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:04.900 11:27:32 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:04.900 11:27:32 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:04.900 11:27:32 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:04.900 11:27:32 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:04.900 11:27:32 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:04.900 11:27:32 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:04.900 11:27:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:04.900 11:27:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:04.900 11:27:32 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:04.900 11:27:32 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.900 11:27:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.901 11:27:32 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=118683 00:33:04.901 11:27:32 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 118683 00:33:04.901 11:27:32 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 118683 ']' 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:04.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.901 [2024-11-20 11:27:32.103723] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:33:04.901 [2024-11-20 11:27:32.103766] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.901 [2024-11-20 11:27:32.180554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.901 [2024-11-20 11:27:32.221573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.901 [2024-11-20 11:27:32.221609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.901 [2024-11-20 11:27:32.221616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.901 [2024-11-20 11:27:32.221623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.901 [2024-11-20 11:27:32.221629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:04.901 [2024-11-20 11:27:32.222193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:04.901 11:27:32 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.901 11:27:32 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.901 11:27:32 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:04.901 11:27:32 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.901 [2024-11-20 11:27:32.352487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.901 11:27:32 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:04.901 11:27:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:04.901 ************************************ 00:33:04.901 START TEST fio_dif_1_default 00:33:04.901 ************************************ 00:33:04.901 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:04.901 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:04.901 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:04.901 11:27:32 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:04.901 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:04.901 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:04.901 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:04.901 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.901 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:05.160 bdev_null0 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:05.160 [2024-11-20 11:27:32.420783] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:05.160 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:05.161 { 00:33:05.161 "params": { 00:33:05.161 "name": "Nvme$subsystem", 00:33:05.161 "trtype": "$TEST_TRANSPORT", 00:33:05.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.161 "adrfam": "ipv4", 00:33:05.161 "trsvcid": "$NVMF_PORT", 00:33:05.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.161 "hdgst": ${hdgst:-false}, 00:33:05.161 "ddgst": ${ddgst:-false} 00:33:05.161 }, 00:33:05.161 "method": "bdev_nvme_attach_controller" 00:33:05.161 } 00:33:05.161 EOF 00:33:05.161 )") 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:05.161 "params": { 00:33:05.161 "name": "Nvme0", 00:33:05.161 "trtype": "tcp", 00:33:05.161 "traddr": "10.0.0.2", 00:33:05.161 "adrfam": "ipv4", 00:33:05.161 "trsvcid": "4420", 00:33:05.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:05.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:05.161 "hdgst": false, 00:33:05.161 "ddgst": false 00:33:05.161 }, 00:33:05.161 "method": "bdev_nvme_attach_controller" 00:33:05.161 }' 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:05.161 11:27:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.419 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:05.420 fio-3.35 
00:33:05.420 Starting 1 thread 00:33:17.630 00:33:17.630 filename0: (groupid=0, jobs=1): err= 0: pid=119053: Wed Nov 20 11:27:43 2024 00:33:17.630 read: IOPS=200, BW=803KiB/s (822kB/s)(8048KiB/10027msec) 00:33:17.630 slat (nsec): min=5833, max=32298, avg=6224.79, stdev=1429.94 00:33:17.630 clat (usec): min=373, max=42568, avg=19916.79, stdev=20449.55 00:33:17.630 lat (usec): min=378, max=42575, avg=19923.01, stdev=20449.45 00:33:17.630 clat percentiles (usec): 00:33:17.630 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 412], 00:33:17.630 | 30.00th=[ 420], 40.00th=[ 478], 50.00th=[ 611], 60.00th=[40633], 00:33:17.630 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:17.630 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:17.630 | 99.99th=[42730] 00:33:17.630 bw ( KiB/s): min= 736, max= 960, per=100.00%, avg=803.20, stdev=61.33, samples=20 00:33:17.630 iops : min= 184, max= 240, avg=200.80, stdev=15.33, samples=20 00:33:17.630 lat (usec) : 500=40.71%, 750=11.58% 00:33:17.630 lat (msec) : 10=0.20%, 50=47.51% 00:33:17.630 cpu : usr=92.49%, sys=7.26%, ctx=10, majf=0, minf=0 00:33:17.630 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.630 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.630 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:17.630 00:33:17.630 Run status group 0 (all jobs): 00:33:17.630 READ: bw=803KiB/s (822kB/s), 803KiB/s-803KiB/s (822kB/s-822kB/s), io=8048KiB (8241kB), run=10027-10027msec 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.630 00:33:17.630 real 0m11.284s 00:33:17.630 user 0m15.781s 00:33:17.630 sys 0m1.092s 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:17.630 ************************************ 00:33:17.630 END TEST fio_dif_1_default 00:33:17.630 ************************************ 00:33:17.630 11:27:43 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:17.630 11:27:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:17.630 11:27:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.630 11:27:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:17.630 ************************************ 00:33:17.630 START TEST fio_dif_1_multi_subsystems 00:33:17.630 ************************************ 00:33:17.630 11:27:43 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.630 bdev_null0 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.630 11:27:43 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.630 [2024-11-20 11:27:43.778473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.630 bdev_null1 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.630 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:17.631 { 00:33:17.631 "params": { 00:33:17.631 "name": "Nvme$subsystem", 00:33:17.631 "trtype": "$TEST_TRANSPORT", 00:33:17.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.631 "adrfam": "ipv4", 00:33:17.631 "trsvcid": "$NVMF_PORT", 00:33:17.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.631 "hdgst": ${hdgst:-false}, 00:33:17.631 "ddgst": ${ddgst:-false} 00:33:17.631 }, 00:33:17.631 "method": "bdev_nvme_attach_controller" 00:33:17.631 } 00:33:17.631 EOF 00:33:17.631 )") 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:17.631 { 00:33:17.631 "params": { 00:33:17.631 "name": "Nvme$subsystem", 00:33:17.631 "trtype": "$TEST_TRANSPORT", 00:33:17.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.631 "adrfam": "ipv4", 00:33:17.631 "trsvcid": "$NVMF_PORT", 00:33:17.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.631 "hdgst": ${hdgst:-false}, 00:33:17.631 "ddgst": ${ddgst:-false} 00:33:17.631 }, 00:33:17.631 "method": "bdev_nvme_attach_controller" 00:33:17.631 } 00:33:17.631 EOF 00:33:17.631 )") 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:17.631 "params": { 00:33:17.631 "name": "Nvme0", 00:33:17.631 "trtype": "tcp", 00:33:17.631 "traddr": "10.0.0.2", 00:33:17.631 "adrfam": "ipv4", 00:33:17.631 "trsvcid": "4420", 00:33:17.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:17.631 "hdgst": false, 00:33:17.631 "ddgst": false 00:33:17.631 }, 00:33:17.631 "method": "bdev_nvme_attach_controller" 00:33:17.631 },{ 00:33:17.631 "params": { 00:33:17.631 "name": "Nvme1", 00:33:17.631 "trtype": "tcp", 00:33:17.631 "traddr": "10.0.0.2", 00:33:17.631 "adrfam": "ipv4", 00:33:17.631 "trsvcid": "4420", 00:33:17.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:17.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:17.631 "hdgst": false, 00:33:17.631 "ddgst": false 00:33:17.631 }, 00:33:17.631 "method": "bdev_nvme_attach_controller" 00:33:17.631 }' 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:17.631 11:27:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.631 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:17.631 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:17.631 fio-3.35 00:33:17.631 Starting 2 threads 00:33:27.608 00:33:27.608 filename0: (groupid=0, jobs=1): err= 0: pid=121019: Wed Nov 20 11:27:54 2024 00:33:27.608 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10009msec) 00:33:27.608 slat (nsec): min=5983, max=28048, avg=7651.23, stdev=2411.93 00:33:27.609 clat (usec): min=413, max=42310, avg=40827.79, stdev=2592.11 00:33:27.609 lat (usec): min=419, max=42336, avg=40835.44, stdev=2592.15 00:33:27.609 clat percentiles (usec): 00:33:27.609 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:27.609 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:27.609 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:27.609 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:27.609 | 99.99th=[42206] 00:33:27.609 bw ( KiB/s): min= 384, max= 416, per=33.74%, avg=390.40, stdev=13.13, samples=20 00:33:27.609 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:33:27.609 lat (usec) : 500=0.41% 00:33:27.609 lat (msec) : 50=99.59% 00:33:27.609 cpu : usr=97.17%, sys=2.58%, ctx=13, majf=0, minf=9 00:33:27.609 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.609 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.609 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.609 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:27.609 filename1: (groupid=0, jobs=1): err= 0: pid=121020: Wed Nov 20 11:27:54 2024 00:33:27.609 read: IOPS=191, BW=764KiB/s (783kB/s)(7648KiB/10005msec) 00:33:27.609 slat (nsec): min=5974, max=31597, avg=7027.47, stdev=1867.51 00:33:27.609 clat (usec): min=394, max=42557, avg=20909.34, stdev=20459.63 00:33:27.609 lat (usec): min=400, max=42563, avg=20916.37, stdev=20459.07 00:33:27.609 clat percentiles (usec): 00:33:27.609 | 1.00th=[ 404], 5.00th=[ 412], 10.00th=[ 416], 20.00th=[ 424], 00:33:27.609 | 30.00th=[ 433], 40.00th=[ 474], 50.00th=[ 832], 60.00th=[41157], 00:33:27.609 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:27.609 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:27.609 | 99.99th=[42730] 00:33:27.609 bw ( KiB/s): min= 672, max= 832, per=66.28%, avg=766.32, stdev=36.13, samples=19 00:33:27.609 iops : min= 168, max= 208, avg=191.58, stdev= 9.03, samples=19 00:33:27.609 lat (usec) : 500=40.85%, 750=8.94%, 1000=0.21% 00:33:27.609 lat (msec) : 50=50.00% 00:33:27.609 cpu : usr=96.96%, sys=2.79%, ctx=6, majf=0, minf=9 00:33:27.609 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.609 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.609 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:27.609 00:33:27.609 Run status group 0 (all jobs): 00:33:27.609 READ: bw=1156KiB/s (1183kB/s), 392KiB/s-764KiB/s (401kB/s-783kB/s), io=11.3MiB (11.8MB), run=10005-10009msec 00:33:27.609 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:33:27.609 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:27.609 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:27.609 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:27.609 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:27.609 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:27.609 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.609 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:27.868 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:27.869 11:27:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.869 00:33:27.869 real 0m11.386s 00:33:27.869 user 0m26.439s 00:33:27.869 sys 0m0.853s 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.869 11:27:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:27.869 ************************************ 00:33:27.869 END TEST fio_dif_1_multi_subsystems 00:33:27.869 ************************************ 00:33:27.869 11:27:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:27.869 11:27:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:27.869 11:27:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.869 11:27:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:27.869 ************************************ 00:33:27.869 START TEST fio_dif_rand_params 00:33:27.869 ************************************ 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:27.869 11:27:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:27.869 bdev_null0 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:27.869 [2024-11-20 11:27:55.235012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:27.869 { 
00:33:27.869 "params": { 00:33:27.869 "name": "Nvme$subsystem", 00:33:27.869 "trtype": "$TEST_TRANSPORT", 00:33:27.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:27.869 "adrfam": "ipv4", 00:33:27.869 "trsvcid": "$NVMF_PORT", 00:33:27.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:27.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:27.869 "hdgst": ${hdgst:-false}, 00:33:27.869 "ddgst": ${ddgst:-false} 00:33:27.869 }, 00:33:27.869 "method": "bdev_nvme_attach_controller" 00:33:27.869 } 00:33:27.869 EOF 00:33:27.869 )") 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:27.869 
11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:27.869 "params": { 00:33:27.869 "name": "Nvme0", 00:33:27.869 "trtype": "tcp", 00:33:27.869 "traddr": "10.0.0.2", 00:33:27.869 "adrfam": "ipv4", 00:33:27.869 "trsvcid": "4420", 00:33:27.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:27.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:27.869 "hdgst": false, 00:33:27.869 "ddgst": false 00:33:27.869 }, 00:33:27.869 "method": "bdev_nvme_attach_controller" 00:33:27.869 }' 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:27.869 11:27:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:28.127 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:28.127 ... 00:33:28.127 fio-3.35 00:33:28.127 Starting 3 threads 00:33:34.692 00:33:34.692 filename0: (groupid=0, jobs=1): err= 0: pid=122859: Wed Nov 20 11:28:01 2024 00:33:34.692 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(183MiB/5046msec) 00:33:34.692 slat (nsec): min=6159, max=26391, avg=10517.08, stdev=1878.45 00:33:34.692 clat (usec): min=3116, max=89631, avg=10312.62, stdev=8397.44 00:33:34.692 lat (usec): min=3124, max=89643, avg=10323.14, stdev=8397.30 00:33:34.692 clat percentiles (usec): 00:33:34.692 | 1.00th=[ 4359], 5.00th=[ 6194], 10.00th=[ 6849], 20.00th=[ 7767], 00:33:34.692 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:33:34.692 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[11600], 00:33:34.692 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51643], 99.95th=[89654], 00:33:34.692 | 99.99th=[89654] 00:33:34.692 bw ( KiB/s): min=26112, max=45568, per=32.16%, avg=37350.40, stdev=5950.68, samples=10 00:33:34.692 iops : min= 204, max= 356, avg=291.80, stdev=46.49, samples=10 00:33:34.692 lat (msec) : 4=0.68%, 10=84.68%, 20=10.26%, 50=3.76%, 100=0.62% 00:33:34.692 cpu : usr=94.77%, sys=4.96%, ctx=10, majf=0, minf=9 00:33:34.692 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.692 issued rwts: total=1462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.692 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:34.692 filename0: (groupid=0, jobs=1): err= 0: pid=122860: Wed Nov 20 11:28:01 2024 00:33:34.692 read: IOPS=320, BW=40.1MiB/s (42.0MB/s)(201MiB/5003msec) 00:33:34.692 slat (nsec): min=6221, max=27016, 
avg=10630.46, stdev=2014.55 00:33:34.692 clat (usec): min=3495, max=49691, avg=9338.19, stdev=5755.81 00:33:34.692 lat (usec): min=3501, max=49703, avg=9348.82, stdev=5755.72 00:33:34.692 clat percentiles (usec): 00:33:34.692 | 1.00th=[ 3621], 5.00th=[ 5276], 10.00th=[ 5932], 20.00th=[ 6980], 00:33:34.692 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:33:34.692 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10945], 95.00th=[11863], 00:33:34.692 | 99.00th=[46924], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:33:34.692 | 99.99th=[49546] 00:33:34.692 bw ( KiB/s): min=24064, max=54272, per=35.34%, avg=41036.80, stdev=7725.42, samples=10 00:33:34.692 iops : min= 188, max= 424, avg=320.60, stdev=60.35, samples=10 00:33:34.692 lat (msec) : 4=3.05%, 10=76.45%, 20=18.44%, 50=2.06% 00:33:34.692 cpu : usr=94.32%, sys=5.40%, ctx=8, majf=0, minf=10 00:33:34.692 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.692 issued rwts: total=1605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.692 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:34.692 filename0: (groupid=0, jobs=1): err= 0: pid=122861: Wed Nov 20 11:28:01 2024 00:33:34.692 read: IOPS=299, BW=37.5MiB/s (39.3MB/s)(189MiB/5043msec) 00:33:34.692 slat (nsec): min=6205, max=27074, avg=10830.07, stdev=1975.16 00:33:34.692 clat (usec): min=3548, max=50801, avg=9972.24, stdev=6071.69 00:33:34.692 lat (usec): min=3556, max=50808, avg=9983.07, stdev=6071.77 00:33:34.692 clat percentiles (usec): 00:33:34.692 | 1.00th=[ 3752], 5.00th=[ 5538], 10.00th=[ 6325], 20.00th=[ 7242], 00:33:34.692 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765], 00:33:34.692 | 70.00th=[10421], 80.00th=[11076], 90.00th=[11863], 95.00th=[12256], 00:33:34.692 | 99.00th=[47973], 99.50th=[49021], 
99.90th=[50070], 99.95th=[50594], 00:33:34.692 | 99.99th=[50594] 00:33:34.692 bw ( KiB/s): min=31232, max=44800, per=33.26%, avg=38630.40, stdev=5532.14, samples=10 00:33:34.692 iops : min= 244, max= 350, avg=301.80, stdev=43.22, samples=10 00:33:34.692 lat (msec) : 4=1.72%, 10=61.15%, 20=34.81%, 50=2.25%, 100=0.07% 00:33:34.692 cpu : usr=94.55%, sys=5.18%, ctx=13, majf=0, minf=11 00:33:34.692 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.692 issued rwts: total=1511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.692 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:34.692 00:33:34.692 Run status group 0 (all jobs): 00:33:34.692 READ: bw=113MiB/s (119MB/s), 36.2MiB/s-40.1MiB/s (38.0MB/s-42.0MB/s), io=572MiB (600MB), run=5003-5046msec 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:34.692 11:28:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.692 bdev_null0 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.692 [2024-11-20 11:28:01.507802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.692 bdev_null1 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.692 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:34.693 bdev_null2 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:34.693 { 00:33:34.693 "params": { 00:33:34.693 "name": "Nvme$subsystem", 00:33:34.693 "trtype": "$TEST_TRANSPORT", 00:33:34.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.693 "adrfam": "ipv4", 00:33:34.693 "trsvcid": "$NVMF_PORT", 00:33:34.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.693 "hdgst": ${hdgst:-false}, 00:33:34.693 "ddgst": ${ddgst:-false} 00:33:34.693 }, 00:33:34.693 "method": "bdev_nvme_attach_controller" 00:33:34.693 } 00:33:34.693 EOF 00:33:34.693 )") 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.693 11:28:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:34.693 { 00:33:34.693 "params": { 00:33:34.693 "name": "Nvme$subsystem", 00:33:34.693 "trtype": "$TEST_TRANSPORT", 00:33:34.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.693 "adrfam": "ipv4", 00:33:34.693 "trsvcid": "$NVMF_PORT", 00:33:34.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.693 "hdgst": ${hdgst:-false}, 00:33:34.693 "ddgst": ${ddgst:-false} 00:33:34.693 }, 00:33:34.693 "method": "bdev_nvme_attach_controller" 00:33:34.693 } 00:33:34.693 EOF 00:33:34.693 )") 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:34.693 11:28:01 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:34.693 { 00:33:34.693 "params": { 00:33:34.693 "name": "Nvme$subsystem", 00:33:34.693 "trtype": "$TEST_TRANSPORT", 00:33:34.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.693 "adrfam": "ipv4", 00:33:34.693 "trsvcid": "$NVMF_PORT", 00:33:34.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.693 "hdgst": ${hdgst:-false}, 00:33:34.693 "ddgst": ${ddgst:-false} 00:33:34.693 }, 00:33:34.693 "method": "bdev_nvme_attach_controller" 00:33:34.693 } 00:33:34.693 EOF 00:33:34.693 )") 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:34.693 "params": { 00:33:34.693 "name": "Nvme0", 00:33:34.693 "trtype": "tcp", 00:33:34.693 "traddr": "10.0.0.2", 00:33:34.693 "adrfam": "ipv4", 00:33:34.693 "trsvcid": "4420", 00:33:34.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:34.693 "hdgst": false, 00:33:34.693 "ddgst": false 00:33:34.693 }, 00:33:34.693 "method": "bdev_nvme_attach_controller" 00:33:34.693 },{ 00:33:34.693 "params": { 00:33:34.693 "name": "Nvme1", 00:33:34.693 "trtype": "tcp", 00:33:34.693 "traddr": "10.0.0.2", 00:33:34.693 "adrfam": "ipv4", 00:33:34.693 "trsvcid": "4420", 00:33:34.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:34.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:34.693 "hdgst": false, 00:33:34.693 "ddgst": false 00:33:34.693 }, 00:33:34.693 "method": "bdev_nvme_attach_controller" 00:33:34.693 },{ 00:33:34.693 "params": { 00:33:34.693 "name": "Nvme2", 00:33:34.693 "trtype": "tcp", 00:33:34.693 "traddr": "10.0.0.2", 00:33:34.693 "adrfam": "ipv4", 00:33:34.693 "trsvcid": "4420", 00:33:34.693 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:34.693 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:34.693 "hdgst": false, 00:33:34.693 "ddgst": false 00:33:34.693 }, 00:33:34.693 "method": "bdev_nvme_attach_controller" 00:33:34.693 }' 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.693 11:28:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:34.693 11:28:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.693 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:34.693 ... 00:33:34.693 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:34.693 ... 00:33:34.693 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:34.693 ... 
00:33:34.693 fio-3.35 00:33:34.693 Starting 24 threads 00:33:46.896 00:33:46.896 filename0: (groupid=0, jobs=1): err= 0: pid=124037: Wed Nov 20 11:28:12 2024 00:33:46.896 read: IOPS=619, BW=2476KiB/s (2536kB/s)(24.2MiB/10002msec) 00:33:46.896 slat (usec): min=6, max=246, avg=40.58, stdev=20.45 00:33:46.896 clat (usec): min=1280, max=34458, avg=25481.76, stdev=2802.83 00:33:46.896 lat (usec): min=1293, max=34511, avg=25522.34, stdev=2804.45 00:33:46.896 clat percentiles (usec): 00:33:46.897 | 1.00th=[ 5473], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:33:46.897 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.897 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.897 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:33:46.897 | 99.99th=[34341] 00:33:46.897 bw ( KiB/s): min= 2304, max= 3193, per=4.20%, avg=2472.05, stdev=179.72, samples=19 00:33:46.897 iops : min= 576, max= 798, avg=618.00, stdev=44.88, samples=19 00:33:46.897 lat (msec) : 2=0.52%, 4=0.37%, 10=0.36%, 20=0.60%, 50=98.16% 00:33:46.897 cpu : usr=97.93%, sys=1.25%, ctx=160, majf=0, minf=9 00:33:46.897 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.897 filename0: (groupid=0, jobs=1): err= 0: pid=124038: Wed Nov 20 11:28:12 2024 00:33:46.897 read: IOPS=611, BW=2446KiB/s (2504kB/s)(23.9MiB/10013msec) 00:33:46.897 slat (usec): min=5, max=110, avg=51.44, stdev=18.72 00:33:46.897 clat (usec): min=12821, max=32245, avg=25717.47, stdev=1206.77 00:33:46.897 lat (usec): min=12873, max=32260, avg=25768.91, stdev=1208.82 00:33:46.897 clat percentiles (usec): 00:33:46.897 | 
1.00th=[23462], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:33:46.897 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:33:46.897 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.897 | 99.00th=[28443], 99.50th=[28705], 99.90th=[29492], 99.95th=[29492], 00:33:46.897 | 99.99th=[32375] 00:33:46.897 bw ( KiB/s): min= 2304, max= 2565, per=4.16%, avg=2446.10, stdev=82.32, samples=20 00:33:46.897 iops : min= 576, max= 641, avg=611.50, stdev=20.54, samples=20 00:33:46.897 lat (msec) : 20=0.46%, 50=99.54% 00:33:46.897 cpu : usr=98.30%, sys=1.01%, ctx=135, majf=0, minf=9 00:33:46.897 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 issued rwts: total=6122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.897 filename0: (groupid=0, jobs=1): err= 0: pid=124039: Wed Nov 20 11:28:12 2024 00:33:46.897 read: IOPS=612, BW=2449KiB/s (2507kB/s)(23.9MiB/10011msec) 00:33:46.897 slat (usec): min=5, max=110, avg=51.65, stdev=18.96 00:33:46.897 clat (usec): min=12728, max=28877, avg=25684.86, stdev=1248.99 00:33:46.897 lat (usec): min=12776, max=28945, avg=25736.51, stdev=1251.64 00:33:46.897 clat percentiles (usec): 00:33:46.897 | 1.00th=[23200], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:33:46.897 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:33:46.897 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.897 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:33:46.897 | 99.99th=[28967] 00:33:46.897 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2438.95, stdev=90.23, samples=19 00:33:46.897 iops : min= 576, max= 640, avg=609.74, stdev=22.56, samples=19 00:33:46.897 lat 
(msec) : 20=0.52%, 50=99.48% 00:33:46.897 cpu : usr=98.50%, sys=0.94%, ctx=117, majf=0, minf=9 00:33:46.897 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:46.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.897 filename0: (groupid=0, jobs=1): err= 0: pid=124040: Wed Nov 20 11:28:12 2024 00:33:46.897 read: IOPS=612, BW=2449KiB/s (2508kB/s)(23.9MiB/10009msec) 00:33:46.897 slat (usec): min=6, max=155, avg=35.81, stdev=20.23 00:33:46.897 clat (usec): min=8630, max=29773, avg=25843.57, stdev=1429.32 00:33:46.897 lat (usec): min=8638, max=29793, avg=25879.37, stdev=1425.15 00:33:46.897 clat percentiles (usec): 00:33:46.897 | 1.00th=[22938], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:33:46.897 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.897 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27657], 95.00th=[28181], 00:33:46.897 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29754], 99.95th=[29754], 00:33:46.897 | 99.99th=[29754] 00:33:46.897 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2444.80, stdev=82.01, samples=20 00:33:46.897 iops : min= 576, max= 640, avg=611.20, stdev=20.50, samples=20 00:33:46.897 lat (msec) : 10=0.26%, 20=0.26%, 50=99.48% 00:33:46.897 cpu : usr=98.75%, sys=0.82%, ctx=60, majf=0, minf=9 00:33:46.897 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:46.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.897 filename0: 
(groupid=0, jobs=1): err= 0: pid=124041: Wed Nov 20 11:28:12 2024 00:33:46.897 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10006msec) 00:33:46.897 slat (nsec): min=6241, max=80034, avg=36492.27, stdev=14294.61 00:33:46.897 clat (usec): min=10491, max=38351, avg=25815.62, stdev=1346.90 00:33:46.897 lat (usec): min=10500, max=38367, avg=25852.12, stdev=1349.75 00:33:46.897 clat percentiles (usec): 00:33:46.897 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:33:46.897 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.897 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.897 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[29230], 00:33:46.897 | 99.99th=[38536] 00:33:46.897 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.47, stdev=84.20, samples=19 00:33:46.897 iops : min= 576, max= 640, avg=611.37, stdev=21.05, samples=19 00:33:46.897 lat (msec) : 20=0.55%, 50=99.45% 00:33:46.897 cpu : usr=98.65%, sys=0.98%, ctx=24, majf=0, minf=9 00:33:46.897 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.897 filename0: (groupid=0, jobs=1): err= 0: pid=124042: Wed Nov 20 11:28:12 2024 00:33:46.897 read: IOPS=611, BW=2445KiB/s (2503kB/s)(23.9MiB/10001msec) 00:33:46.897 slat (nsec): min=4513, max=80022, avg=37773.80, stdev=14237.92 00:33:46.897 clat (usec): min=11461, max=42537, avg=25849.55, stdev=1853.95 00:33:46.897 lat (usec): min=11472, max=42563, avg=25887.32, stdev=1854.89 00:33:46.897 clat percentiles (usec): 00:33:46.897 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:33:46.897 | 30.00th=[25297], 
40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.897 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.897 | 99.00th=[28705], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:33:46.897 | 99.99th=[42730] 00:33:46.897 bw ( KiB/s): min= 2176, max= 2560, per=4.13%, avg=2432.00, stdev=84.16, samples=19 00:33:46.897 iops : min= 544, max= 640, avg=608.00, stdev=21.04, samples=19 00:33:46.897 lat (msec) : 20=0.85%, 50=99.15% 00:33:46.897 cpu : usr=98.84%, sys=0.79%, ctx=13, majf=0, minf=9 00:33:46.897 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:46.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.897 filename0: (groupid=0, jobs=1): err= 0: pid=124043: Wed Nov 20 11:28:12 2024 00:33:46.897 read: IOPS=613, BW=2453KiB/s (2512kB/s)(24.0MiB/10018msec) 00:33:46.897 slat (nsec): min=6789, max=74423, avg=17464.59, stdev=13848.67 00:33:46.897 clat (usec): min=7666, max=29014, avg=25938.28, stdev=1454.74 00:33:46.897 lat (usec): min=7678, max=29048, avg=25955.74, stdev=1454.66 00:33:46.897 clat percentiles (usec): 00:33:46.897 | 1.00th=[20055], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:33:46.897 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[25822], 00:33:46.897 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27657], 95.00th=[28181], 00:33:46.897 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:33:46.897 | 99.99th=[28967] 00:33:46.897 bw ( KiB/s): min= 2304, max= 2688, per=4.16%, avg=2451.20, stdev=85.87, samples=20 00:33:46.897 iops : min= 576, max= 672, avg=612.80, stdev=21.47, samples=20 00:33:46.897 lat (msec) : 10=0.03%, 20=0.90%, 50=99.07% 00:33:46.897 cpu : usr=98.45%, sys=1.04%, ctx=62, 
majf=0, minf=9 00:33:46.897 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.897 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.897 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.897 filename0: (groupid=0, jobs=1): err= 0: pid=124044: Wed Nov 20 11:28:12 2024 00:33:46.897 read: IOPS=611, BW=2444KiB/s (2503kB/s)(23.9MiB/10002msec) 00:33:46.897 slat (nsec): min=4221, max=98698, avg=48303.66, stdev=16959.06 00:33:46.897 clat (usec): min=12972, max=38627, avg=25756.06, stdev=1387.86 00:33:46.897 lat (usec): min=13012, max=38639, avg=25804.36, stdev=1389.13 00:33:46.897 clat percentiles (usec): 00:33:46.897 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:33:46.897 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:33:46.897 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.898 | 99.00th=[28443], 99.50th=[28705], 99.90th=[38536], 99.95th=[38536], 00:33:46.898 | 99.99th=[38536] 00:33:46.898 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2438.95, stdev=79.51, samples=19 00:33:46.898 iops : min= 576, max= 640, avg=609.74, stdev=19.88, samples=19 00:33:46.898 lat (msec) : 20=0.52%, 50=99.48% 00:33:46.898 cpu : usr=98.59%, sys=0.95%, ctx=45, majf=0, minf=9 00:33:46.898 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:46.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.898 filename1: (groupid=0, jobs=1): err= 0: pid=124045: Wed Nov 20 11:28:12 2024 00:33:46.898 
read: IOPS=612, BW=2449KiB/s (2507kB/s)(23.9MiB/10011msec) 00:33:46.898 slat (nsec): min=6097, max=92832, avg=40882.57, stdev=17109.19 00:33:46.898 clat (usec): min=11024, max=39535, avg=25761.72, stdev=1326.22 00:33:46.898 lat (usec): min=11036, max=39557, avg=25802.60, stdev=1328.97 00:33:46.898 clat percentiles (usec): 00:33:46.898 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:33:46.898 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.898 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.898 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:33:46.898 | 99.99th=[39584] 00:33:46.898 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2438.74, stdev=79.52, samples=19 00:33:46.898 iops : min= 576, max= 640, avg=609.68, stdev=19.88, samples=19 00:33:46.898 lat (msec) : 20=0.55%, 50=99.45% 00:33:46.898 cpu : usr=97.96%, sys=1.38%, ctx=116, majf=0, minf=9 00:33:46.898 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.898 filename1: (groupid=0, jobs=1): err= 0: pid=124046: Wed Nov 20 11:28:12 2024 00:33:46.898 read: IOPS=611, BW=2445KiB/s (2503kB/s)(23.9MiB/10001msec) 00:33:46.898 slat (nsec): min=4724, max=98616, avg=48229.89, stdev=17353.73 00:33:46.898 clat (usec): min=12929, max=39110, avg=25751.83, stdev=1398.42 00:33:46.898 lat (usec): min=12972, max=39123, avg=25800.06, stdev=1399.92 00:33:46.898 clat percentiles (usec): 00:33:46.898 | 1.00th=[23462], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:33:46.898 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:33:46.898 | 70.00th=[26084], 
80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.898 | 99.00th=[28443], 99.50th=[28705], 99.90th=[39060], 99.95th=[39060], 00:33:46.898 | 99.99th=[39060] 00:33:46.898 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2438.74, stdev=79.52, samples=19 00:33:46.898 iops : min= 576, max= 640, avg=609.68, stdev=19.88, samples=19 00:33:46.898 lat (msec) : 20=0.52%, 50=99.48% 00:33:46.898 cpu : usr=98.20%, sys=1.14%, ctx=162, majf=0, minf=9 00:33:46.898 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:46.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.898 filename1: (groupid=0, jobs=1): err= 0: pid=124047: Wed Nov 20 11:28:12 2024 00:33:46.898 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10006msec) 00:33:46.898 slat (nsec): min=9457, max=98242, avg=42753.99, stdev=18550.28 00:33:46.898 clat (usec): min=10371, max=28985, avg=25792.23, stdev=1358.00 00:33:46.898 lat (usec): min=10390, max=29004, avg=25834.98, stdev=1360.08 00:33:46.898 clat percentiles (usec): 00:33:46.898 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:33:46.898 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.898 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.898 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:33:46.898 | 99.99th=[28967] 00:33:46.898 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2445.47, stdev=94.40, samples=19 00:33:46.898 iops : min= 576, max= 672, avg=611.37, stdev=23.60, samples=19 00:33:46.898 lat (msec) : 20=0.52%, 50=99.48% 00:33:46.898 cpu : usr=98.62%, sys=0.95%, ctx=42, majf=0, minf=10 00:33:46.898 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:33:46.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.898 filename1: (groupid=0, jobs=1): err= 0: pid=124048: Wed Nov 20 11:28:12 2024 00:33:46.898 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10006msec) 00:33:46.898 slat (nsec): min=6405, max=96743, avg=29329.19, stdev=17551.25 00:33:46.898 clat (usec): min=10290, max=29006, avg=25909.56, stdev=1375.43 00:33:46.898 lat (usec): min=10316, max=29019, avg=25938.89, stdev=1374.96 00:33:46.898 clat percentiles (usec): 00:33:46.898 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:33:46.898 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[25822], 00:33:46.898 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27657], 95.00th=[28181], 00:33:46.898 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:33:46.898 | 99.99th=[28967] 00:33:46.898 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2445.47, stdev=94.40, samples=19 00:33:46.898 iops : min= 576, max= 672, avg=611.37, stdev=23.60, samples=19 00:33:46.898 lat (msec) : 20=0.52%, 50=99.48% 00:33:46.898 cpu : usr=98.84%, sys=0.78%, ctx=18, majf=0, minf=9 00:33:46.898 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:46.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.898 filename1: (groupid=0, jobs=1): err= 0: pid=124049: Wed Nov 20 11:28:12 2024 00:33:46.898 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10006msec) 00:33:46.898 slat (usec): 
min=7, max=112, avg=46.54, stdev=20.48 00:33:46.898 clat (usec): min=10302, max=28914, avg=25751.81, stdev=1368.14 00:33:46.898 lat (usec): min=10321, max=28950, avg=25798.35, stdev=1369.69 00:33:46.898 clat percentiles (usec): 00:33:46.898 | 1.00th=[23462], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:33:46.898 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.898 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.898 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28705], 99.95th=[28967], 00:33:46.898 | 99.99th=[28967] 00:33:46.898 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2445.47, stdev=94.40, samples=19 00:33:46.898 iops : min= 576, max= 672, avg=611.37, stdev=23.60, samples=19 00:33:46.898 lat (msec) : 20=0.52%, 50=99.48% 00:33:46.898 cpu : usr=98.09%, sys=1.22%, ctx=98, majf=0, minf=9 00:33:46.898 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:46.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.898 filename1: (groupid=0, jobs=1): err= 0: pid=124050: Wed Nov 20 11:28:12 2024 00:33:46.898 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10006msec) 00:33:46.898 slat (nsec): min=6965, max=96094, avg=29865.05, stdev=18035.25 00:33:46.898 clat (usec): min=9559, max=28995, avg=25901.85, stdev=1391.40 00:33:46.898 lat (usec): min=9571, max=29008, avg=25931.71, stdev=1389.60 00:33:46.898 clat percentiles (usec): 00:33:46.898 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:33:46.898 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[25822], 00:33:46.898 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27657], 95.00th=[28181], 00:33:46.898 | 99.00th=[28705], 99.50th=[28705], 
99.90th=[28967], 99.95th=[28967], 00:33:46.898 | 99.99th=[28967] 00:33:46.898 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2445.47, stdev=94.40, samples=19 00:33:46.898 iops : min= 576, max= 672, avg=611.37, stdev=23.60, samples=19 00:33:46.898 lat (msec) : 10=0.03%, 20=0.52%, 50=99.45% 00:33:46.898 cpu : usr=98.23%, sys=1.30%, ctx=37, majf=0, minf=9 00:33:46.898 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.898 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.898 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.898 filename1: (groupid=0, jobs=1): err= 0: pid=124051: Wed Nov 20 11:28:12 2024 00:33:46.898 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10006msec) 00:33:46.898 slat (nsec): min=4087, max=74420, avg=26205.93, stdev=17344.34 00:33:46.898 clat (usec): min=6804, max=40743, avg=25858.41, stdev=1601.58 00:33:46.898 lat (usec): min=6811, max=40756, avg=25884.61, stdev=1602.85 00:33:46.898 clat percentiles (usec): 00:33:46.898 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:33:46.898 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.898 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27657], 95.00th=[27919], 00:33:46.898 | 99.00th=[28443], 99.50th=[28705], 99.90th=[34341], 99.95th=[34341], 00:33:46.898 | 99.99th=[40633] 00:33:46.898 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2438.74, stdev=90.24, samples=19 00:33:46.898 iops : min= 576, max= 640, avg=609.68, stdev=22.56, samples=19 00:33:46.898 lat (msec) : 10=0.26%, 20=0.59%, 50=99.15% 00:33:46.898 cpu : usr=98.47%, sys=0.99%, ctx=76, majf=0, minf=9 00:33:46.898 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:46.898 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.899 filename1: (groupid=0, jobs=1): err= 0: pid=124052: Wed Nov 20 11:28:12 2024 00:33:46.899 read: IOPS=611, BW=2445KiB/s (2503kB/s)(23.9MiB/10001msec) 00:33:46.899 slat (usec): min=5, max=110, avg=51.02, stdev=19.14 00:33:46.899 clat (usec): min=16511, max=30190, avg=25754.01, stdev=1094.55 00:33:46.899 lat (usec): min=16535, max=30205, avg=25805.03, stdev=1096.04 00:33:46.899 clat percentiles (usec): 00:33:46.899 | 1.00th=[23462], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:33:46.899 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:33:46.899 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.899 | 99.00th=[28443], 99.50th=[28705], 99.90th=[30278], 99.95th=[30278], 00:33:46.899 | 99.99th=[30278] 00:33:46.899 bw ( KiB/s): min= 2304, max= 2565, per=4.14%, avg=2439.63, stdev=79.55, samples=19 00:33:46.899 iops : min= 576, max= 641, avg=609.89, stdev=19.87, samples=19 00:33:46.899 lat (msec) : 20=0.26%, 50=99.74% 00:33:46.899 cpu : usr=98.59%, sys=0.92%, ctx=72, majf=0, minf=9 00:33:46.899 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:46.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.899 filename2: (groupid=0, jobs=1): err= 0: pid=124053: Wed Nov 20 11:28:12 2024 00:33:46.899 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10005msec) 00:33:46.899 slat (usec): min=6, max=100, avg=16.41, stdev=11.86 00:33:46.899 clat (usec): min=10484, max=29085, 
avg=25996.74, stdev=1369.41 00:33:46.899 lat (usec): min=10511, max=29099, avg=26013.14, stdev=1368.87 00:33:46.899 clat percentiles (usec): 00:33:46.899 | 1.00th=[23462], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:33:46.899 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:33:46.899 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27657], 95.00th=[28181], 00:33:46.899 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:33:46.899 | 99.99th=[28967] 00:33:46.899 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2445.47, stdev=94.40, samples=19 00:33:46.899 iops : min= 576, max= 672, avg=611.37, stdev=23.60, samples=19 00:33:46.899 lat (msec) : 20=0.52%, 50=99.48% 00:33:46.899 cpu : usr=98.67%, sys=0.94%, ctx=50, majf=0, minf=9 00:33:46.899 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:46.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.899 filename2: (groupid=0, jobs=1): err= 0: pid=124054: Wed Nov 20 11:28:12 2024 00:33:46.899 read: IOPS=611, BW=2445KiB/s (2503kB/s)(23.9MiB/10014msec) 00:33:46.899 slat (usec): min=5, max=110, avg=50.85, stdev=19.04 00:33:46.899 clat (usec): min=13907, max=29470, avg=25722.68, stdev=1162.60 00:33:46.899 lat (usec): min=13959, max=29485, avg=25773.53, stdev=1164.89 00:33:46.899 clat percentiles (usec): 00:33:46.899 | 1.00th=[23462], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:33:46.899 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:33:46.899 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.899 | 99.00th=[28443], 99.50th=[28705], 99.90th=[29492], 99.95th=[29492], 00:33:46.899 | 99.99th=[29492] 00:33:46.899 bw ( KiB/s): min= 
2304, max= 2565, per=4.16%, avg=2446.10, stdev=82.32, samples=20 00:33:46.899 iops : min= 576, max= 641, avg=611.50, stdev=20.54, samples=20 00:33:46.899 lat (msec) : 20=0.39%, 50=99.61% 00:33:46.899 cpu : usr=98.96%, sys=0.67%, ctx=33, majf=0, minf=9 00:33:46.899 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:46.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 issued rwts: total=6120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.899 filename2: (groupid=0, jobs=1): err= 0: pid=124055: Wed Nov 20 11:28:12 2024 00:33:46.899 read: IOPS=610, BW=2443KiB/s (2501kB/s)(23.9MiB/10009msec) 00:33:46.899 slat (nsec): min=4088, max=74150, avg=26709.40, stdev=17357.50 00:33:46.899 clat (usec): min=14408, max=45171, avg=25923.37, stdev=1355.32 00:33:46.899 lat (usec): min=14420, max=45183, avg=25950.08, stdev=1355.98 00:33:46.899 clat percentiles (usec): 00:33:46.899 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:33:46.899 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.899 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27657], 95.00th=[27919], 00:33:46.899 | 99.00th=[28443], 99.50th=[28705], 99.90th=[39060], 99.95th=[39060], 00:33:46.899 | 99.99th=[45351] 00:33:46.899 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2438.74, stdev=90.24, samples=19 00:33:46.899 iops : min= 576, max= 640, avg=609.68, stdev=22.56, samples=19 00:33:46.899 lat (msec) : 20=0.56%, 50=99.44% 00:33:46.899 cpu : usr=98.30%, sys=1.05%, ctx=84, majf=0, minf=9 00:33:46.899 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 
issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.899 filename2: (groupid=0, jobs=1): err= 0: pid=124056: Wed Nov 20 11:28:12 2024 00:33:46.899 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10006msec) 00:33:46.899 slat (usec): min=10, max=109, avg=48.77, stdev=19.83 00:33:46.899 clat (usec): min=10331, max=28947, avg=25722.92, stdev=1358.97 00:33:46.899 lat (usec): min=10349, max=29004, avg=25771.69, stdev=1361.73 00:33:46.899 clat percentiles (usec): 00:33:46.899 | 1.00th=[23462], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:33:46.899 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.899 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.899 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28705], 99.95th=[28967], 00:33:46.899 | 99.99th=[28967] 00:33:46.899 bw ( KiB/s): min= 2304, max= 2688, per=4.15%, avg=2445.47, stdev=94.40, samples=19 00:33:46.899 iops : min= 576, max= 672, avg=611.37, stdev=23.60, samples=19 00:33:46.899 lat (msec) : 20=0.52%, 50=99.48% 00:33:46.899 cpu : usr=98.78%, sys=0.86%, ctx=23, majf=0, minf=9 00:33:46.899 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:46.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.899 filename2: (groupid=0, jobs=1): err= 0: pid=124057: Wed Nov 20 11:28:12 2024 00:33:46.899 read: IOPS=611, BW=2445KiB/s (2503kB/s)(23.9MiB/10001msec) 00:33:46.899 slat (usec): min=5, max=174, avg=50.08, stdev=20.37 00:33:46.899 clat (usec): min=12822, max=39126, avg=25713.57, stdev=1408.61 00:33:46.899 lat (usec): min=12853, max=39139, avg=25763.65, stdev=1410.69 00:33:46.899 clat 
percentiles (usec): 00:33:46.899 | 1.00th=[23462], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:33:46.899 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:33:46.899 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.899 | 99.00th=[28443], 99.50th=[28705], 99.90th=[39060], 99.95th=[39060], 00:33:46.899 | 99.99th=[39060] 00:33:46.899 bw ( KiB/s): min= 2304, max= 2560, per=4.14%, avg=2438.74, stdev=79.52, samples=19 00:33:46.899 iops : min= 576, max= 640, avg=609.68, stdev=19.88, samples=19 00:33:46.899 lat (msec) : 20=0.52%, 50=99.48% 00:33:46.899 cpu : usr=98.11%, sys=1.22%, ctx=109, majf=0, minf=9 00:33:46.899 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.899 filename2: (groupid=0, jobs=1): err= 0: pid=124058: Wed Nov 20 11:28:12 2024 00:33:46.899 read: IOPS=611, BW=2445KiB/s (2503kB/s)(23.9MiB/10001msec) 00:33:46.899 slat (nsec): min=6532, max=92700, avg=40782.17, stdev=19168.76 00:33:46.899 clat (usec): min=11985, max=54054, avg=25781.69, stdev=1586.27 00:33:46.899 lat (usec): min=11994, max=54069, avg=25822.47, stdev=1588.29 00:33:46.899 clat percentiles (usec): 00:33:46.899 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:33:46.899 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:33:46.899 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:33:46.899 | 99.00th=[28705], 99.50th=[28967], 99.90th=[41681], 99.95th=[41681], 00:33:46.899 | 99.99th=[54264] 00:33:46.899 bw ( KiB/s): min= 2176, max= 2560, per=4.13%, avg=2432.00, stdev=85.33, samples=19 00:33:46.899 iops : min= 544, max= 640, 
avg=608.00, stdev=21.33, samples=19 00:33:46.899 lat (msec) : 20=0.56%, 50=99.41%, 100=0.03% 00:33:46.899 cpu : usr=98.27%, sys=1.00%, ctx=97, majf=0, minf=9 00:33:46.899 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:46.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.899 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.899 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.899 filename2: (groupid=0, jobs=1): err= 0: pid=124059: Wed Nov 20 11:28:12 2024 00:33:46.899 read: IOPS=651, BW=2604KiB/s (2667kB/s)(25.4MiB/10002msec) 00:33:46.899 slat (nsec): min=6157, max=91303, avg=20711.97, stdev=18015.86 00:33:46.899 clat (usec): min=3328, max=63307, avg=24438.18, stdev=4639.49 00:33:46.899 lat (usec): min=3335, max=63326, avg=24458.89, stdev=4640.76 00:33:46.899 clat percentiles (usec): 00:33:46.899 | 1.00th=[13304], 5.00th=[16450], 10.00th=[17957], 20.00th=[20579], 00:33:46.899 | 30.00th=[23200], 40.00th=[25035], 50.00th=[25560], 60.00th=[25560], 00:33:46.899 | 70.00th=[25822], 80.00th=[26608], 90.00th=[28181], 95.00th=[31851], 00:33:46.900 | 99.00th=[36963], 99.50th=[38536], 99.90th=[53740], 99.95th=[53740], 00:33:46.900 | 99.99th=[63177] 00:33:46.900 bw ( KiB/s): min= 2356, max= 2880, per=4.41%, avg=2598.95, stdev=143.45, samples=19 00:33:46.900 iops : min= 589, max= 720, avg=649.74, stdev=35.86, samples=19 00:33:46.900 lat (msec) : 4=0.09%, 10=0.41%, 20=16.62%, 50=82.63%, 100=0.25% 00:33:46.900 cpu : usr=98.65%, sys=0.94%, ctx=35, majf=0, minf=9 00:33:46.900 IO depths : 1=1.0%, 2=2.6%, 4=8.8%, 8=74.0%, 16=13.5%, 32=0.0%, >=64=0.0% 00:33:46.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.900 complete : 0=0.0%, 4=90.1%, 8=6.4%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.900 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:33:46.900 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.900 filename2: (groupid=0, jobs=1): err= 0: pid=124060: Wed Nov 20 11:28:12 2024 00:33:46.900 read: IOPS=614, BW=2458KiB/s (2517kB/s)(24.0MiB/10002msec) 00:33:46.900 slat (usec): min=6, max=238, avg=32.90, stdev=19.65 00:33:46.900 clat (usec): min=2982, max=53996, avg=25747.80, stdev=2593.25 00:33:46.900 lat (usec): min=2988, max=54015, avg=25780.69, stdev=2592.81 00:33:46.900 clat percentiles (usec): 00:33:46.900 | 1.00th=[16909], 5.00th=[23987], 10.00th=[25035], 20.00th=[25297], 00:33:46.900 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:33:46.900 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27657], 95.00th=[28181], 00:33:46.900 | 99.00th=[28967], 99.50th=[34341], 99.90th=[53740], 99.95th=[53740], 00:33:46.900 | 99.99th=[53740] 00:33:46.900 bw ( KiB/s): min= 2304, max= 2592, per=4.16%, avg=2446.53, stdev=85.62, samples=19 00:33:46.900 iops : min= 576, max= 648, avg=611.63, stdev=21.40, samples=19 00:33:46.900 lat (msec) : 4=0.26%, 20=1.72%, 50=97.75%, 100=0.26% 00:33:46.900 cpu : usr=98.77%, sys=0.84%, ctx=37, majf=0, minf=9 00:33:46.900 IO depths : 1=5.0%, 2=11.0%, 4=24.4%, 8=52.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:33:46.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.900 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.900 issued rwts: total=6146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.900 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:46.900 00:33:46.900 Run status group 0 (all jobs): 00:33:46.900 READ: bw=57.5MiB/s (60.3MB/s), 2443KiB/s-2604KiB/s (2501kB/s-2667kB/s), io=576MiB (604MB), run=10001-10018msec 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # 
for sub in "$@" 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 
11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@28 -- # local sub 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 bdev_null0 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 [2024-11-20 11:28:13.376919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 bdev_null1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 
11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:46.900 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:46.900 { 00:33:46.901 "params": { 00:33:46.901 "name": "Nvme$subsystem", 00:33:46.901 "trtype": "$TEST_TRANSPORT", 00:33:46.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.901 "adrfam": "ipv4", 00:33:46.901 "trsvcid": "$NVMF_PORT", 00:33:46.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.901 "hdgst": 
${hdgst:-false}, 00:33:46.901 "ddgst": ${ddgst:-false} 00:33:46.901 }, 00:33:46.901 "method": "bdev_nvme_attach_controller" 00:33:46.901 } 00:33:46.901 EOF 00:33:46.901 )") 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # 
for subsystem in "${@:-1}" 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:46.901 { 00:33:46.901 "params": { 00:33:46.901 "name": "Nvme$subsystem", 00:33:46.901 "trtype": "$TEST_TRANSPORT", 00:33:46.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:46.901 "adrfam": "ipv4", 00:33:46.901 "trsvcid": "$NVMF_PORT", 00:33:46.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:46.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:46.901 "hdgst": ${hdgst:-false}, 00:33:46.901 "ddgst": ${ddgst:-false} 00:33:46.901 }, 00:33:46.901 "method": "bdev_nvme_attach_controller" 00:33:46.901 } 00:33:46.901 EOF 00:33:46.901 )") 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:46.901 "params": { 00:33:46.901 "name": "Nvme0", 00:33:46.901 "trtype": "tcp", 00:33:46.901 "traddr": "10.0.0.2", 00:33:46.901 "adrfam": "ipv4", 00:33:46.901 "trsvcid": "4420", 00:33:46.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.901 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:46.901 "hdgst": false, 00:33:46.901 "ddgst": false 00:33:46.901 }, 00:33:46.901 "method": "bdev_nvme_attach_controller" 00:33:46.901 },{ 00:33:46.901 "params": { 00:33:46.901 "name": "Nvme1", 00:33:46.901 "trtype": "tcp", 00:33:46.901 "traddr": "10.0.0.2", 00:33:46.901 "adrfam": "ipv4", 00:33:46.901 "trsvcid": "4420", 00:33:46.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:46.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:46.901 "hdgst": false, 00:33:46.901 "ddgst": false 00:33:46.901 }, 00:33:46.901 "method": "bdev_nvme_attach_controller" 00:33:46.901 }' 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:46.901 11:28:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:46.901 11:28:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:46.901 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:46.901 ... 00:33:46.901 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:46.901 ... 00:33:46.901 fio-3.35 00:33:46.901 Starting 4 threads 00:33:52.263 00:33:52.263 filename0: (groupid=0, jobs=1): err= 0: pid=126005: Wed Nov 20 11:28:19 2024 00:33:52.263 read: IOPS=2726, BW=21.3MiB/s (22.3MB/s)(107MiB/5001msec) 00:33:52.263 slat (nsec): min=6121, max=45555, avg=9170.84, stdev=3358.03 00:33:52.263 clat (usec): min=694, max=5463, avg=2907.25, stdev=383.89 00:33:52.263 lat (usec): min=706, max=5476, avg=2916.42, stdev=384.11 00:33:52.263 clat percentiles (usec): 00:33:52.263 | 1.00th=[ 1942], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2573], 00:33:52.263 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3032], 00:33:52.263 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3261], 95.00th=[ 3458], 00:33:52.263 | 99.00th=[ 4113], 99.50th=[ 4359], 99.90th=[ 4883], 99.95th=[ 5080], 00:33:52.263 | 99.99th=[ 5407] 00:33:52.263 bw ( KiB/s): min=20416, max=22768, per=26.19%, avg=21805.67, stdev=762.32, samples=9 00:33:52.263 iops : min= 2552, max= 2846, avg=2725.67, stdev=95.30, samples=9 00:33:52.263 lat (usec) : 750=0.01%, 1000=0.01% 00:33:52.263 lat (msec) : 2=1.16%, 4=97.43%, 10=1.40% 00:33:52.263 cpu : usr=95.50%, sys=4.16%, ctx=19, majf=0, minf=9 00:33:52.263 IO depths : 1=0.2%, 2=6.3%, 4=64.7%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.263 complete 
: 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.263 issued rwts: total=13634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.263 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.263 filename0: (groupid=0, jobs=1): err= 0: pid=126006: Wed Nov 20 11:28:19 2024 00:33:52.263 read: IOPS=2524, BW=19.7MiB/s (20.7MB/s)(98.6MiB/5001msec) 00:33:52.263 slat (nsec): min=6109, max=54205, avg=9236.17, stdev=3525.90 00:33:52.263 clat (usec): min=671, max=6420, avg=3141.59, stdev=469.07 00:33:52.263 lat (usec): min=682, max=6433, avg=3150.83, stdev=468.86 00:33:52.263 clat percentiles (usec): 00:33:52.263 | 1.00th=[ 2147], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2933], 00:33:52.263 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3032], 60.00th=[ 3097], 00:33:52.263 | 70.00th=[ 3130], 80.00th=[ 3294], 90.00th=[ 3621], 95.00th=[ 4080], 00:33:52.263 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5604], 99.95th=[ 5604], 00:33:52.263 | 99.99th=[ 6390] 00:33:52.263 bw ( KiB/s): min=18656, max=21264, per=24.19%, avg=20145.78, stdev=741.18, samples=9 00:33:52.263 iops : min= 2332, max= 2658, avg=2518.22, stdev=92.65, samples=9 00:33:52.263 lat (usec) : 750=0.01%, 1000=0.03% 00:33:52.263 lat (msec) : 2=0.48%, 4=93.98%, 10=5.51% 00:33:52.263 cpu : usr=96.00%, sys=3.68%, ctx=10, majf=0, minf=9 00:33:52.263 IO depths : 1=0.1%, 2=3.8%, 4=68.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.263 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.263 issued rwts: total=12623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.263 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.263 filename1: (groupid=0, jobs=1): err= 0: pid=126007: Wed Nov 20 11:28:19 2024 00:33:52.263 read: IOPS=2587, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec) 00:33:52.263 slat (nsec): min=6025, max=45640, avg=9306.18, stdev=3474.80 00:33:52.263 clat (usec): 
min=706, max=5586, avg=3062.63, stdev=410.00 00:33:52.263 lat (usec): min=712, max=5592, avg=3071.93, stdev=409.89 00:33:52.263 clat percentiles (usec): 00:33:52.263 | 1.00th=[ 2089], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2868], 00:33:52.263 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3032], 60.00th=[ 3064], 00:33:52.263 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3490], 95.00th=[ 3785], 00:33:52.263 | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 5276], 99.95th=[ 5342], 00:33:52.263 | 99.99th=[ 5604] 00:33:52.263 bw ( KiB/s): min=20272, max=21280, per=24.86%, avg=20702.22, stdev=300.98, samples=9 00:33:52.263 iops : min= 2534, max= 2660, avg=2587.78, stdev=37.62, samples=9 00:33:52.263 lat (usec) : 750=0.02%, 1000=0.10% 00:33:52.263 lat (msec) : 2=0.46%, 4=96.30%, 10=3.12% 00:33:52.263 cpu : usr=95.98%, sys=3.70%, ctx=9, majf=0, minf=9 00:33:52.263 IO depths : 1=0.1%, 2=7.0%, 4=64.8%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.263 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.263 issued rwts: total=12941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.263 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.263 filename1: (groupid=0, jobs=1): err= 0: pid=126008: Wed Nov 20 11:28:19 2024 00:33:52.263 read: IOPS=2572, BW=20.1MiB/s (21.1MB/s)(101MiB/5003msec) 00:33:52.263 slat (usec): min=6, max=164, avg= 9.24, stdev= 3.73 00:33:52.263 clat (usec): min=607, max=5693, avg=3081.83, stdev=405.50 00:33:52.263 lat (usec): min=618, max=5704, avg=3091.07, stdev=405.39 00:33:52.263 clat percentiles (usec): 00:33:52.263 | 1.00th=[ 2089], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2868], 00:33:52.263 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3032], 60.00th=[ 3064], 00:33:52.263 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3785], 00:33:52.263 | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5407], 
00:33:52.263 | 99.99th=[ 5669] 00:33:52.263 bw ( KiB/s): min=20256, max=21008, per=24.71%, avg=20577.78, stdev=282.67, samples=9 00:33:52.263 iops : min= 2532, max= 2626, avg=2572.22, stdev=35.33, samples=9 00:33:52.263 lat (usec) : 750=0.01%, 1000=0.01% 00:33:52.263 lat (msec) : 2=0.75%, 4=95.89%, 10=3.35% 00:33:52.263 cpu : usr=95.72%, sys=3.96%, ctx=8, majf=0, minf=9 00:33:52.263 IO depths : 1=0.2%, 2=3.8%, 4=67.7%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.263 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.263 issued rwts: total=12872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.263 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:52.263 00:33:52.263 Run status group 0 (all jobs): 00:33:52.264 READ: bw=81.3MiB/s (85.3MB/s), 19.7MiB/s-21.3MiB/s (20.7MB/s-22.3MB/s), io=407MiB (427MB), run=5001-5003msec 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:52.264 11:28:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.264 00:33:52.264 real 0m24.554s 00:33:52.264 user 4m51.480s 00:33:52.264 sys 0m5.049s 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:52.264 11:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.264 ************************************ 00:33:52.264 END TEST fio_dif_rand_params 00:33:52.264 ************************************ 00:33:52.523 11:28:19 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:52.523 11:28:19 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:52.523 11:28:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:52.523 11:28:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:52.523 ************************************ 00:33:52.523 START TEST fio_dif_digest 00:33:52.523 ************************************ 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:52.523 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.524 bdev_null0 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.524 [2024-11-20 11:28:19.866803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.524 { 00:33:52.524 "params": { 00:33:52.524 "name": "Nvme$subsystem", 00:33:52.524 "trtype": "$TEST_TRANSPORT", 00:33:52.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.524 "adrfam": "ipv4", 00:33:52.524 "trsvcid": "$NVMF_PORT", 00:33:52.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.524 "hdgst": ${hdgst:-false}, 00:33:52.524 "ddgst": ${ddgst:-false} 00:33:52.524 }, 00:33:52.524 "method": "bdev_nvme_attach_controller" 00:33:52.524 } 00:33:52.524 EOF 00:33:52.524 )") 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.524 "params": { 00:33:52.524 "name": "Nvme0", 00:33:52.524 "trtype": "tcp", 00:33:52.524 "traddr": "10.0.0.2", 00:33:52.524 "adrfam": "ipv4", 00:33:52.524 "trsvcid": "4420", 00:33:52.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.524 "hdgst": true, 00:33:52.524 "ddgst": true 00:33:52.524 }, 00:33:52.524 "method": "bdev_nvme_attach_controller" 00:33:52.524 }' 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:52.524 11:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.783 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:52.783 ... 
00:33:52.783 fio-3.35 00:33:52.783 Starting 3 threads 00:34:04.994 00:34:04.994 filename0: (groupid=0, jobs=1): err= 0: pid=127070: Wed Nov 20 11:28:30 2024 00:34:04.994 read: IOPS=289, BW=36.1MiB/s (37.9MB/s)(363MiB/10046msec) 00:34:04.994 slat (nsec): min=6397, max=37580, avg=11217.27, stdev=1719.05 00:34:04.994 clat (usec): min=8276, max=49788, avg=10350.78, stdev=1774.75 00:34:04.994 lat (usec): min=8288, max=49801, avg=10362.00, stdev=1774.75 00:34:04.994 clat percentiles (usec): 00:34:04.994 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:34:04.994 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:34:04.994 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:34:04.994 | 99.00th=[12256], 99.50th=[12649], 99.90th=[49546], 99.95th=[49546], 00:34:04.994 | 99.99th=[49546] 00:34:04.994 bw ( KiB/s): min=33792, max=38400, per=35.14%, avg=37145.60, stdev=1037.05, samples=20 00:34:04.994 iops : min= 264, max= 300, avg=290.20, stdev= 8.10, samples=20 00:34:04.994 lat (msec) : 10=34.40%, 20=65.43%, 50=0.17% 00:34:04.994 cpu : usr=94.55%, sys=5.16%, ctx=22, majf=0, minf=106 00:34:04.994 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.994 issued rwts: total=2904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.994 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:04.994 filename0: (groupid=0, jobs=1): err= 0: pid=127071: Wed Nov 20 11:28:30 2024 00:34:04.994 read: IOPS=271, BW=33.9MiB/s (35.5MB/s)(340MiB/10044msec) 00:34:04.994 slat (nsec): min=6467, max=37576, avg=11371.13, stdev=1691.38 00:34:04.994 clat (usec): min=6966, max=50120, avg=11041.23, stdev=1285.12 00:34:04.994 lat (usec): min=6979, max=50131, avg=11052.60, stdev=1285.12 00:34:04.994 clat percentiles (usec): 00:34:04.994 | 1.00th=[ 
8979], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:34:04.994 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:34:04.994 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:34:04.994 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13829], 99.95th=[46924], 00:34:04.994 | 99.99th=[50070] 00:34:04.994 bw ( KiB/s): min=34048, max=35328, per=32.93%, avg=34816.00, stdev=321.68, samples=20 00:34:04.994 iops : min= 266, max= 276, avg=272.00, stdev= 2.51, samples=20 00:34:04.994 lat (msec) : 10=8.74%, 20=91.18%, 50=0.04%, 100=0.04% 00:34:04.994 cpu : usr=94.50%, sys=5.20%, ctx=20, majf=0, minf=85 00:34:04.994 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.994 issued rwts: total=2722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.994 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:04.994 filename0: (groupid=0, jobs=1): err= 0: pid=127072: Wed Nov 20 11:28:30 2024 00:34:04.995 read: IOPS=265, BW=33.2MiB/s (34.9MB/s)(334MiB/10043msec) 00:34:04.995 slat (nsec): min=6394, max=28005, avg=11225.52, stdev=1721.16 00:34:04.995 clat (usec): min=6116, max=48762, avg=11251.54, stdev=1285.06 00:34:04.995 lat (usec): min=6128, max=48774, avg=11262.77, stdev=1285.06 00:34:04.995 clat percentiles (usec): 00:34:04.995 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:34:04.995 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:34:04.995 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:34:04.995 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14222], 99.95th=[46400], 00:34:04.995 | 99.99th=[49021] 00:34:04.995 bw ( KiB/s): min=33792, max=35584, per=32.32%, avg=34163.20, stdev=419.21, samples=20 00:34:04.995 iops : min= 264, max= 278, avg=266.90, stdev= 3.28, 
samples=20 00:34:04.995 lat (msec) : 10=4.75%, 20=95.17%, 50=0.07% 00:34:04.995 cpu : usr=94.38%, sys=5.32%, ctx=21, majf=0, minf=81 00:34:04.995 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.995 issued rwts: total=2671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.995 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:04.995 00:34:04.995 Run status group 0 (all jobs): 00:34:04.995 READ: bw=103MiB/s (108MB/s), 33.2MiB/s-36.1MiB/s (34.9MB/s-37.9MB/s), io=1037MiB (1088MB), run=10043-10046msec 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.995 00:34:04.995 real 
0m11.102s 00:34:04.995 user 0m34.855s 00:34:04.995 sys 0m1.854s 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.995 11:28:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:04.995 ************************************ 00:34:04.995 END TEST fio_dif_digest 00:34:04.995 ************************************ 00:34:04.995 11:28:30 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:04.995 11:28:30 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:04.995 11:28:30 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:04.995 11:28:30 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:04.995 11:28:30 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:04.995 11:28:30 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:04.995 11:28:30 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:04.995 11:28:30 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:04.995 rmmod nvme_tcp 00:34:04.995 rmmod nvme_fabrics 00:34:04.995 rmmod nvme_keyring 00:34:04.995 11:28:31 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.995 11:28:31 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:04.995 11:28:31 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:04.995 11:28:31 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 118683 ']' 00:34:04.995 11:28:31 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 118683 00:34:04.995 11:28:31 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 118683 ']' 00:34:04.995 11:28:31 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 118683 00:34:04.995 11:28:31 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:04.995 11:28:31 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.995 11:28:31 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118683 00:34:04.995 11:28:31 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.995 11:28:31 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.995 11:28:31 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118683' 00:34:04.995 killing process with pid 118683 00:34:04.995 11:28:31 nvmf_dif -- common/autotest_common.sh@973 -- # kill 118683 00:34:04.995 11:28:31 nvmf_dif -- common/autotest_common.sh@978 -- # wait 118683 00:34:04.995 11:28:31 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:04.995 11:28:31 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:06.900 Waiting for block devices as requested 00:34:06.900 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:06.900 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:06.900 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:06.900 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:06.900 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:07.159 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:07.159 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:07.159 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:07.159 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:07.417 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:07.417 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:07.417 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:07.677 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:07.677 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:07.677 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:07.677 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:07.937 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:07.937 11:28:35 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:07.937 11:28:35 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:07.937 11:28:35 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:07.937 11:28:35 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:07.937 11:28:35 nvmf_dif -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:34:07.937 11:28:35 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:07.937 11:28:35 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:07.937 11:28:35 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:07.937 11:28:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.937 11:28:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:07.937 11:28:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.472 11:28:37 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:10.472 00:34:10.472 real 1m14.391s 00:34:10.472 user 7m8.335s 00:34:10.472 sys 0m20.757s 00:34:10.472 11:28:37 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.472 11:28:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:10.472 ************************************ 00:34:10.472 END TEST nvmf_dif 00:34:10.472 ************************************ 00:34:10.472 11:28:37 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:10.472 11:28:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:10.472 11:28:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.472 11:28:37 -- common/autotest_common.sh@10 -- # set +x 00:34:10.472 ************************************ 00:34:10.472 START TEST nvmf_abort_qd_sizes 00:34:10.472 ************************************ 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:10.472 * Looking for test storage... 
00:34:10.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:10.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.472 --rc genhtml_branch_coverage=1 00:34:10.472 --rc genhtml_function_coverage=1 00:34:10.472 --rc genhtml_legend=1 00:34:10.472 --rc geninfo_all_blocks=1 00:34:10.472 --rc geninfo_unexecuted_blocks=1 00:34:10.472 00:34:10.472 ' 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:10.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.472 --rc genhtml_branch_coverage=1 00:34:10.472 --rc genhtml_function_coverage=1 00:34:10.472 --rc genhtml_legend=1 00:34:10.472 --rc 
geninfo_all_blocks=1 00:34:10.472 --rc geninfo_unexecuted_blocks=1 00:34:10.472 00:34:10.472 ' 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:10.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.472 --rc genhtml_branch_coverage=1 00:34:10.472 --rc genhtml_function_coverage=1 00:34:10.472 --rc genhtml_legend=1 00:34:10.472 --rc geninfo_all_blocks=1 00:34:10.472 --rc geninfo_unexecuted_blocks=1 00:34:10.472 00:34:10.472 ' 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:10.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.472 --rc genhtml_branch_coverage=1 00:34:10.472 --rc genhtml_function_coverage=1 00:34:10.472 --rc genhtml_legend=1 00:34:10.472 --rc geninfo_all_blocks=1 00:34:10.472 --rc geninfo_unexecuted_blocks=1 00:34:10.472 00:34:10.472 ' 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.472 11:28:37 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:10.472 11:28:37 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.473 11:28:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:10.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:10.473 11:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:17.047 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:17.048 11:28:43 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:17.048 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:17.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:17.048 Found net devices under 0000:86:00.0: cvl_0_0 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:17.048 Found net devices under 0000:86:00.1: cvl_0_1 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:17.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:17.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:34:17.048 00:34:17.048 --- 10.0.0.2 ping statistics --- 00:34:17.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.048 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:17.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:34:17.048 00:34:17.048 --- 10.0.0.1 ping statistics --- 00:34:17.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.048 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:17.048 11:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:18.956 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:18.956 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:19.894 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:20.153 11:28:47 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=135074 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 135074 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 135074 ']' 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:20.153 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:20.153 [2024-11-20 11:28:47.525031] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
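[Annotation] The nvmf_tgt above is launched inside the network namespace wired up earlier in the trace (nvmf/common.sh @265-@291): one physical port (cvl_0_0) is moved into the namespace and addressed 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule opens TCP 4420, and both directions are ping-checked. A dry-run condensation of those traced commands — run() only echoes, since the real commands need root; interface names and addresses are taken straight from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring traced in nvmf/common.sh (@265-@291).
# run() only prints each command, so this is safe without root; drop the echo
# to execute for real. Names and IPs come from the log above.
TARGET_IF=cvl_0_0        # moved into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
```

Everything later in the log that must run on the target side (nvmf_tgt itself, and the listener it binds on 10.0.0.2:4420) is prefixed with `ip netns exec cvl_0_0_ns_spdk` — that prefix is what NVMF_TARGET_NS_CMD holds.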
00:34:20.153 [2024-11-20 11:28:47.525080] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.153 [2024-11-20 11:28:47.603689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:20.413 [2024-11-20 11:28:47.648235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.413 [2024-11-20 11:28:47.648274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.413 [2024-11-20 11:28:47.648281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.413 [2024-11-20 11:28:47.648287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.413 [2024-11-20 11:28:47.648292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
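[Annotation] The four "Reactor started on core N" notices below follow directly from the `-m 0xf` coremask passed to nvmf_tgt: each set bit in the mask selects one reactor core, so 0xf (binary 1111) yields cores 0-3. A small decoder of that convention — mask_to_cores is a helper name of my own, not an SPDK function:

```shell
# Decode an SPDK-style hex coremask into the cores it selects; -m 0xf
# (binary 1111) is why the log shows reactors started on cores 0-3.
# mask_to_cores is my own helper name, not part of SPDK.
mask_to_cores() {
    local mask=$(( $1 )) core=0 out=""
    while (( mask > 0 )); do
        if (( mask & 1 )); then out="${out:+$out }$core"; fi
        mask=$(( mask >> 1 ))      # shift the next bit into position 0
        core=$(( core + 1 ))
    done
    echo "$out"
}

mask_to_cores 0xf    # -> 0 1 2 3
```

The log prints the reactors out of numeric order (1, 2, 0, 3) because each core logs its own startup concurrently; the set of cores still matches the mask.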
00:34:20.413 [2024-11-20 11:28:47.649938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.413 [2024-11-20 11:28:47.650050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:20.413 [2024-11-20 11:28:47.650082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.413 [2024-11-20 11:28:47.650083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
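[Annotation] nvme_in_userspace below (scripts/common.sh) and the earlier "Found net devices under 0000:86:00.x" records both lean on the same sysfs idiom: glob a per-device directory, then strip the paths to basenames with the `${arr[@]##*/}` expansion. A recreation against a throwaway directory tree, so the sketch runs without the real hardware (the paths under `$sysfs` are stand-ins for /sys/bus/pci/devices):

```shell
# The sysfs idiom used above: glob /sys/bus/pci/devices/$pci/net/* to find
# the netdevs behind a PCI port, then keep only the interface names via the
# ${arr[@]##*/} expansion. A throwaway directory stands in for sysfs here.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

pci_netdevs() {
    local pci=$1
    local devs=("$sysfs/$pci/net/"*)   # one glob hit per interface directory
    devs=("${devs[@]##*/}")            # strip everything up to the last /
    echo "${devs[*]}"
}

pci_netdevs 0000:86:00.0    # -> cvl_0_0
```

`##*/` deletes the longest prefix matching `*/` from each array element, which is why the log's pci_net_devs array goes from full sysfs paths to bare names like cvl_0_0 in a single assignment.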
00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:20.413 11:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:20.413 ************************************ 00:34:20.413 START TEST spdk_target_abort 00:34:20.413 ************************************ 00:34:20.413 11:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:20.413 11:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:20.413 11:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:20.413 11:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.413 11:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:23.702 spdk_targetn1 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:23.702 [2024-11-20 11:28:50.666196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:23.702 [2024-11-20 11:28:50.708894] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:23.702 11:28:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:26.991 Initializing NVMe Controllers 00:34:26.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:26.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:26.991 Initialization complete. Launching workers. 
00:34:26.991 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15161, failed: 0 00:34:26.991 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1360, failed to submit 13801 00:34:26.991 success 710, unsuccessful 650, failed 0 00:34:26.991 11:28:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:26.991 11:28:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:30.280 Initializing NVMe Controllers 00:34:30.280 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:30.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:30.280 Initialization complete. Launching workers. 00:34:30.280 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8618, failed: 0 00:34:30.280 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1264, failed to submit 7354 00:34:30.280 success 318, unsuccessful 946, failed 0 00:34:30.280 11:28:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:30.280 11:28:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:33.568 Initializing NVMe Controllers 00:34:33.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:33.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:33.568 Initialization complete. Launching workers. 
00:34:33.568 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37818, failed: 0 00:34:33.568 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2846, failed to submit 34972 00:34:33.568 success 581, unsuccessful 2265, failed 0 00:34:33.568 11:29:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:33.568 11:29:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.568 11:29:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:33.568 11:29:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.568 11:29:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:33.568 11:29:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.568 11:29:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:34.505 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 135074 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 135074 ']' 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 135074 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 135074 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 135074' 00:34:34.506 killing process with pid 135074 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 135074 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 135074 00:34:34.506 00:34:34.506 real 0m14.085s 00:34:34.506 user 0m53.712s 00:34:34.506 sys 0m2.546s 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:34.506 ************************************ 00:34:34.506 END TEST spdk_target_abort 00:34:34.506 ************************************ 00:34:34.506 11:29:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:34.506 11:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:34.506 11:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.506 11:29:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:34.506 ************************************ 00:34:34.506 START TEST kernel_target_abort 00:34:34.506 ************************************ 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:34.506 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:34.766 11:29:01 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:34.766 11:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:34.766 11:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:34.766 11:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:37.332 Waiting for block devices as requested 00:34:37.333 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:37.592 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:37.592 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:37.592 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:37.851 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:37.851 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:37.851 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:37.851 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:38.109 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:38.109 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:38.109 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:38.368 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:38.368 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:38.368 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:38.368 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:38.628 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:38.628 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:38.628 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:38.628 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:38.628 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:38.628 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:34:38.628 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:38.628 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:38.628 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:38.628 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:38.628 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:38.888 No valid GPT data, bailing 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:38.888 00:34:38.888 Discovery Log Number of Records 2, Generation counter 2 00:34:38.888 =====Discovery Log Entry 0====== 00:34:38.888 trtype: tcp 00:34:38.888 adrfam: ipv4 00:34:38.888 subtype: current discovery subsystem 00:34:38.888 treq: not specified, sq flow control disable supported 00:34:38.888 portid: 1 00:34:38.888 trsvcid: 4420 00:34:38.888 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:38.888 traddr: 10.0.0.1 00:34:38.888 eflags: none 00:34:38.888 sectype: none 00:34:38.888 =====Discovery Log Entry 1====== 00:34:38.888 trtype: tcp 00:34:38.888 adrfam: ipv4 00:34:38.888 subtype: nvme subsystem 00:34:38.888 treq: not specified, sq flow control disable supported 00:34:38.888 portid: 1 00:34:38.888 trsvcid: 4420 00:34:38.888 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:38.888 traddr: 10.0.0.1 00:34:38.888 eflags: none 00:34:38.888 sectype: none 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:38.888 11:29:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:42.178 Initializing NVMe Controllers 00:34:42.178 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:42.178 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:42.178 Initialization complete. Launching workers. 
00:34:42.178 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93207, failed: 0 00:34:42.178 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93207, failed to submit 0 00:34:42.178 success 0, unsuccessful 93207, failed 0 00:34:42.178 11:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:42.178 11:29:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:45.474 Initializing NVMe Controllers 00:34:45.474 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:45.474 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:45.474 Initialization complete. Launching workers. 00:34:45.474 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 143299, failed: 0 00:34:45.474 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35906, failed to submit 107393 00:34:45.474 success 0, unsuccessful 35906, failed 0 00:34:45.475 11:29:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:45.475 11:29:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:48.764 Initializing NVMe Controllers 00:34:48.764 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:48.764 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:48.764 Initialization complete. Launching workers. 
00:34:48.764 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137292, failed: 0 00:34:48.764 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34382, failed to submit 102910 00:34:48.764 success 0, unsuccessful 34382, failed 0 00:34:48.764 11:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:48.764 11:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:48.764 11:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:48.764 11:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:48.764 11:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:48.764 11:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:48.764 11:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:48.764 11:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:48.764 11:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:48.764 11:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:51.299 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:51.299 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:51.299 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:51.299 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:51.299 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:51.299 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:51.299 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:51.299 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:51.299 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:51.300 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:51.300 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:51.300 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:51.300 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:51.300 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:51.300 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:51.300 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:51.868 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:52.126 00:34:52.126 real 0m17.496s 00:34:52.126 user 0m9.177s 00:34:52.126 sys 0m5.023s 00:34:52.126 11:29:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:52.126 11:29:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:52.126 ************************************ 00:34:52.126 END TEST kernel_target_abort 00:34:52.126 ************************************ 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:52.126 rmmod nvme_tcp 00:34:52.126 rmmod nvme_fabrics 00:34:52.126 rmmod nvme_keyring 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 135074 ']' 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 135074 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 135074 ']' 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 135074 00:34:52.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (135074) - No such process 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 135074 is not found' 00:34:52.126 Process with pid 135074 is not found 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:52.126 11:29:19 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:55.416 Waiting for block devices as requested 00:34:55.416 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:55.416 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:55.416 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:55.416 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:55.416 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:55.416 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:55.416 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:55.675 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:55.675 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:55.675 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:55.675 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:55.935 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:55.935 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:55.935 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:56.194 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:34:56.194 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:56.194 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:56.453 11:29:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.452 11:29:25 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:58.452 00:34:58.452 real 0m48.307s 00:34:58.452 user 1m7.221s 00:34:58.452 sys 0m16.326s 00:34:58.452 11:29:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.452 11:29:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:58.452 ************************************ 00:34:58.452 END TEST nvmf_abort_qd_sizes 00:34:58.452 ************************************ 00:34:58.452 11:29:25 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:58.452 11:29:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:58.452 11:29:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:34:58.452 11:29:25 -- common/autotest_common.sh@10 -- # set +x 00:34:58.452 ************************************ 00:34:58.452 START TEST keyring_file 00:34:58.452 ************************************ 00:34:58.452 11:29:25 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:58.452 * Looking for test storage... 00:34:58.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:58.743 11:29:25 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:58.743 11:29:25 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:34:58.743 11:29:25 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:58.743 11:29:26 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:58.743 11:29:26 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:58.743 11:29:26 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:58.743 11:29:26 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:58.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.743 --rc genhtml_branch_coverage=1 00:34:58.743 --rc genhtml_function_coverage=1 00:34:58.743 --rc genhtml_legend=1 00:34:58.743 --rc geninfo_all_blocks=1 00:34:58.743 --rc geninfo_unexecuted_blocks=1 00:34:58.743 00:34:58.743 ' 00:34:58.743 11:29:26 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:58.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.743 --rc genhtml_branch_coverage=1 00:34:58.743 --rc genhtml_function_coverage=1 00:34:58.743 --rc genhtml_legend=1 00:34:58.743 --rc geninfo_all_blocks=1 00:34:58.743 --rc 
geninfo_unexecuted_blocks=1 00:34:58.743 00:34:58.743 ' 00:34:58.743 11:29:26 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:58.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.743 --rc genhtml_branch_coverage=1 00:34:58.743 --rc genhtml_function_coverage=1 00:34:58.743 --rc genhtml_legend=1 00:34:58.743 --rc geninfo_all_blocks=1 00:34:58.743 --rc geninfo_unexecuted_blocks=1 00:34:58.743 00:34:58.743 ' 00:34:58.743 11:29:26 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:58.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.743 --rc genhtml_branch_coverage=1 00:34:58.743 --rc genhtml_function_coverage=1 00:34:58.743 --rc genhtml_legend=1 00:34:58.743 --rc geninfo_all_blocks=1 00:34:58.743 --rc geninfo_unexecuted_blocks=1 00:34:58.743 00:34:58.743 ' 00:34:58.743 11:29:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:58.743 11:29:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:58.743 11:29:26 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:58.743 11:29:26 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:58.743 11:29:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:58.744 11:29:26 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:58.744 11:29:26 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:58.744 11:29:26 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:58.744 11:29:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.744 11:29:26 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.744 11:29:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.744 11:29:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:58.744 11:29:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:58.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cV17gzyDoo 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cV17gzyDoo 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cV17gzyDoo 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.cV17gzyDoo 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0uDabYV3zb 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:58.744 11:29:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0uDabYV3zb 00:34:58.744 11:29:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0uDabYV3zb 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0uDabYV3zb 
00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=143852 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:58.744 11:29:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 143852 00:34:58.744 11:29:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 143852 ']' 00:34:58.744 11:29:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.744 11:29:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:58.744 11:29:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:58.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:58.744 11:29:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:58.744 11:29:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:58.744 [2024-11-20 11:29:26.213417] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:34:58.744 [2024-11-20 11:29:26.213466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143852 ] 00:34:59.003 [2024-11-20 11:29:26.289682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.003 [2024-11-20 11:29:26.331867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.262 11:29:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.262 11:29:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:59.262 11:29:26 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:59.262 11:29:26 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.262 11:29:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:59.262 [2024-11-20 11:29:26.549708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.262 null0 00:34:59.262 [2024-11-20 11:29:26.581763] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:59.262 [2024-11-20 11:29:26.581998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:59.262 11:29:26 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.262 11:29:26 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:59.262 11:29:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:59.263 [2024-11-20 11:29:26.609823] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:59.263 request: 00:34:59.263 { 00:34:59.263 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:59.263 "secure_channel": false, 00:34:59.263 "listen_address": { 00:34:59.263 "trtype": "tcp", 00:34:59.263 "traddr": "127.0.0.1", 00:34:59.263 "trsvcid": "4420" 00:34:59.263 }, 00:34:59.263 "method": "nvmf_subsystem_add_listener", 00:34:59.263 "req_id": 1 00:34:59.263 } 00:34:59.263 Got JSON-RPC error response 00:34:59.263 response: 00:34:59.263 { 00:34:59.263 "code": -32602, 00:34:59.263 "message": "Invalid parameters" 00:34:59.263 } 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:59.263 11:29:26 keyring_file -- keyring/file.sh@47 -- # bperfpid=143858 00:34:59.263 11:29:26 keyring_file -- keyring/file.sh@49 -- # waitforlisten 143858 /var/tmp/bperf.sock 00:34:59.263 11:29:26 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:59.263 11:29:26 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 143858 ']' 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:59.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.263 11:29:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:59.263 [2024-11-20 11:29:26.662148] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 00:34:59.263 [2024-11-20 11:29:26.662189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143858 ] 00:34:59.263 [2024-11-20 11:29:26.735186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.521 [2024-11-20 11:29:26.778598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.521 11:29:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.521 11:29:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:59.521 11:29:26 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cV17gzyDoo 00:34:59.521 11:29:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cV17gzyDoo 00:34:59.780 11:29:27 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0uDabYV3zb 00:34:59.780 11:29:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0uDabYV3zb 00:34:59.780 11:29:27 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:59.780 11:29:27 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:59.780 11:29:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:59.780 11:29:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:59.780 11:29:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.038 11:29:27 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.cV17gzyDoo == \/\t\m\p\/\t\m\p\.\c\V\1\7\g\z\y\D\o\o ]] 00:35:00.038 11:29:27 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:00.038 11:29:27 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:00.038 11:29:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.038 11:29:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:00.038 11:29:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.296 11:29:27 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.0uDabYV3zb == \/\t\m\p\/\t\m\p\.\0\u\D\a\b\Y\V\3\z\b ]] 00:35:00.296 11:29:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:00.296 11:29:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:00.296 11:29:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.296 11:29:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.296 11:29:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:00.296 11:29:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:00.555 11:29:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:00.555 11:29:27 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:00.555 11:29:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:00.555 11:29:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.555 11:29:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.555 11:29:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:00.555 11:29:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.813 11:29:28 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:00.813 11:29:28 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:00.813 11:29:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:00.813 [2024-11-20 11:29:28.253100] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:01.071 nvme0n1 00:35:01.071 11:29:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:01.071 11:29:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:01.071 11:29:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:01.071 11:29:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:01.071 11:29:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:01.071 11:29:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:01.071 11:29:28 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:01.071 11:29:28 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:01.071 11:29:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:01.071 11:29:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:01.071 11:29:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:01.071 11:29:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:01.071 11:29:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.329 11:29:28 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:01.329 11:29:28 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:01.329 Running I/O for 1 seconds... 00:35:02.702 18842.00 IOPS, 73.60 MiB/s 00:35:02.702 Latency(us) 00:35:02.702 [2024-11-20T10:29:30.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.702 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:02.702 nvme0n1 : 1.00 18891.68 73.80 0.00 0.00 6763.70 2735.42 10827.69 00:35:02.702 [2024-11-20T10:29:30.198Z] =================================================================================================================== 00:35:02.702 [2024-11-20T10:29:30.198Z] Total : 18891.68 73.80 0.00 0.00 6763.70 2735.42 10827.69 00:35:02.702 { 00:35:02.702 "results": [ 00:35:02.702 { 00:35:02.702 "job": "nvme0n1", 00:35:02.702 "core_mask": "0x2", 00:35:02.702 "workload": "randrw", 00:35:02.702 "percentage": 50, 00:35:02.702 "status": "finished", 00:35:02.702 "queue_depth": 128, 00:35:02.702 "io_size": 4096, 00:35:02.702 "runtime": 1.004146, 00:35:02.702 "iops": 18891.67511497332, 00:35:02.702 "mibps": 73.79560591786453, 
00:35:02.702 "io_failed": 0, 00:35:02.702 "io_timeout": 0, 00:35:02.702 "avg_latency_us": 6763.6964206183675, 00:35:02.702 "min_latency_us": 2735.4156521739133, 00:35:02.702 "max_latency_us": 10827.686956521738 00:35:02.702 } 00:35:02.702 ], 00:35:02.702 "core_count": 1 00:35:02.702 } 00:35:02.702 11:29:29 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:02.702 11:29:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:02.702 11:29:30 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:02.702 11:29:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:02.702 11:29:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:02.702 11:29:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:02.702 11:29:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:02.702 11:29:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:02.961 11:29:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:02.961 11:29:30 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:02.961 11:29:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:02.961 11:29:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:02.961 11:29:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:02.961 11:29:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:02.961 11:29:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.220 11:29:30 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:03.220 11:29:30 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:03.221 11:29:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:03.221 [2024-11-20 11:29:30.654524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:03.221 [2024-11-20 11:29:30.654525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85cd00 (107): Transport endpoint is not connected 00:35:03.221 [2024-11-20 11:29:30.655520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85cd00 (9): Bad file descriptor 00:35:03.221 [2024-11-20 11:29:30.656522]
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:03.221 [2024-11-20 11:29:30.656531] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:03.221 [2024-11-20 11:29:30.656539] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:03.221 [2024-11-20 11:29:30.656548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:03.221 request: 00:35:03.221 { 00:35:03.221 "name": "nvme0", 00:35:03.221 "trtype": "tcp", 00:35:03.221 "traddr": "127.0.0.1", 00:35:03.221 "adrfam": "ipv4", 00:35:03.221 "trsvcid": "4420", 00:35:03.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:03.221 "prchk_reftag": false, 00:35:03.221 "prchk_guard": false, 00:35:03.221 "hdgst": false, 00:35:03.221 "ddgst": false, 00:35:03.221 "psk": "key1", 00:35:03.221 "allow_unrecognized_csi": false, 00:35:03.221 "method": "bdev_nvme_attach_controller", 00:35:03.221 "req_id": 1 00:35:03.221 } 00:35:03.221 Got JSON-RPC error response 00:35:03.221 response: 00:35:03.221 { 00:35:03.221 "code": -5, 00:35:03.221 "message": "Input/output error" 00:35:03.221 } 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:03.221 11:29:30 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:03.221 11:29:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:03.221 11:29:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.221 11:29:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:03.221 11:29:30 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:03.221 11:29:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:03.221 11:29:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.480 11:29:30 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:03.480 11:29:30 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:03.480 11:29:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.480 11:29:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:03.480 11:29:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.480 11:29:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:03.480 11:29:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.739 11:29:31 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:03.739 11:29:31 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:03.739 11:29:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:03.998 11:29:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:03.998 11:29:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:03.998 11:29:31 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:03.998 11:29:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.998 11:29:31 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:04.257 11:29:31 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:04.257 11:29:31 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.cV17gzyDoo 00:35:04.257 11:29:31 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.cV17gzyDoo 00:35:04.257 11:29:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:04.257 11:29:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.cV17gzyDoo 00:35:04.257 11:29:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:04.257 11:29:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:04.257 11:29:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:04.257 11:29:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:04.257 11:29:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cV17gzyDoo 00:35:04.257 11:29:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cV17gzyDoo 00:35:04.515 [2024-11-20 11:29:31.831649] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cV17gzyDoo': 0100660 00:35:04.515 [2024-11-20 11:29:31.831675] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:04.515 request: 00:35:04.515 { 00:35:04.515 "name": "key0", 00:35:04.515 "path": "/tmp/tmp.cV17gzyDoo", 00:35:04.515 "method": "keyring_file_add_key", 00:35:04.515 "req_id": 1 00:35:04.515 } 00:35:04.515 Got JSON-RPC error response 00:35:04.515 response: 00:35:04.516 { 00:35:04.516 "code": -1, 00:35:04.516 "message": "Operation not permitted" 00:35:04.516 } 00:35:04.516 11:29:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:04.516 11:29:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:04.516 11:29:31 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:04.516 11:29:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:04.516 11:29:31 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.cV17gzyDoo 00:35:04.516 11:29:31 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cV17gzyDoo 00:35:04.516 11:29:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cV17gzyDoo 00:35:04.775 11:29:32 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.cV17gzyDoo 00:35:04.775 11:29:32 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:04.775 11:29:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:04.775 11:29:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:04.775 11:29:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:04.775 11:29:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:04.775 11:29:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.775 11:29:32 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:04.775 11:29:32 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:04.775 11:29:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:04.775 11:29:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:04.775 11:29:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:04.775 11:29:32 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:04.775 11:29:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:04.775 11:29:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:04.775 11:29:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:04.775 11:29:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:05.035 [2024-11-20 11:29:32.421225] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.cV17gzyDoo': No such file or directory 00:35:05.035 [2024-11-20 11:29:32.421242] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:05.035 [2024-11-20 11:29:32.421257] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:05.035 [2024-11-20 11:29:32.421264] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:05.035 [2024-11-20 11:29:32.421271] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:05.035 [2024-11-20 11:29:32.421277] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:05.035 request: 00:35:05.035 { 00:35:05.035 "name": "nvme0", 00:35:05.035 "trtype": "tcp", 00:35:05.035 "traddr": "127.0.0.1", 00:35:05.035 "adrfam": "ipv4", 00:35:05.035 "trsvcid": "4420", 00:35:05.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.035 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:05.035 "prchk_reftag": false, 00:35:05.035 "prchk_guard": false, 00:35:05.035 "hdgst": false, 00:35:05.035 "ddgst": false, 00:35:05.035 "psk": "key0", 00:35:05.035 "allow_unrecognized_csi": false, 00:35:05.035 "method": "bdev_nvme_attach_controller", 00:35:05.035 "req_id": 1 00:35:05.035 } 00:35:05.035 Got JSON-RPC error response 00:35:05.035 response: 00:35:05.035 { 00:35:05.035 "code": -19, 00:35:05.035 "message": "No such device" 00:35:05.035 } 00:35:05.035 11:29:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:05.035 11:29:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:05.035 11:29:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:05.035 11:29:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:05.035 11:29:32 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:05.035 11:29:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:05.294 11:29:32 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:05.294 11:29:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:05.294 11:29:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:05.294 11:29:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:05.294 11:29:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:05.294 11:29:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:05.294 11:29:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pyKNKAi4er 00:35:05.294 11:29:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:05.294 11:29:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:05.294 11:29:32 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:05.294 11:29:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:05.294 11:29:32 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:05.294 11:29:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:05.294 11:29:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:05.294 11:29:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pyKNKAi4er 00:35:05.294 11:29:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pyKNKAi4er 00:35:05.294 11:29:32 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.pyKNKAi4er 00:35:05.294 11:29:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pyKNKAi4er 00:35:05.294 11:29:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pyKNKAi4er 00:35:05.556 11:29:32 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:05.556 11:29:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:05.816 nvme0n1 00:35:05.816 11:29:33 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:05.816 11:29:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:05.816 11:29:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:05.816 11:29:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.816 11:29:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:05.816 11:29:33 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.074 11:29:33 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:06.074 11:29:33 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:06.074 11:29:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:06.333 11:29:33 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:06.333 11:29:33 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:06.333 11:29:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.333 11:29:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.333 11:29:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.333 11:29:33 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:06.333 11:29:33 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:06.333 11:29:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.333 11:29:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.333 11:29:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.333 11:29:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.333 11:29:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.592 11:29:33 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:06.592 11:29:33 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:06.592 11:29:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:06.851 11:29:34 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:06.851 11:29:34 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:06.851 11:29:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.110 11:29:34 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:07.110 11:29:34 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pyKNKAi4er 00:35:07.110 11:29:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pyKNKAi4er 00:35:07.110 11:29:34 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0uDabYV3zb 00:35:07.110 11:29:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0uDabYV3zb 00:35:07.368 11:29:34 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.368 11:29:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.627 nvme0n1 00:35:07.627 11:29:35 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:07.627 11:29:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:07.886 11:29:35 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:07.886 "subsystems": [ 00:35:07.886 { 00:35:07.886 "subsystem": 
"keyring", 00:35:07.886 "config": [ 00:35:07.886 { 00:35:07.886 "method": "keyring_file_add_key", 00:35:07.886 "params": { 00:35:07.886 "name": "key0", 00:35:07.886 "path": "/tmp/tmp.pyKNKAi4er" 00:35:07.886 } 00:35:07.886 }, 00:35:07.886 { 00:35:07.886 "method": "keyring_file_add_key", 00:35:07.886 "params": { 00:35:07.886 "name": "key1", 00:35:07.886 "path": "/tmp/tmp.0uDabYV3zb" 00:35:07.886 } 00:35:07.886 } 00:35:07.886 ] 00:35:07.886 }, 00:35:07.886 { 00:35:07.887 "subsystem": "iobuf", 00:35:07.887 "config": [ 00:35:07.887 { 00:35:07.887 "method": "iobuf_set_options", 00:35:07.887 "params": { 00:35:07.887 "small_pool_count": 8192, 00:35:07.887 "large_pool_count": 1024, 00:35:07.887 "small_bufsize": 8192, 00:35:07.887 "large_bufsize": 135168, 00:35:07.887 "enable_numa": false 00:35:07.887 } 00:35:07.887 } 00:35:07.887 ] 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "subsystem": "sock", 00:35:07.887 "config": [ 00:35:07.887 { 00:35:07.887 "method": "sock_set_default_impl", 00:35:07.887 "params": { 00:35:07.887 "impl_name": "posix" 00:35:07.887 } 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "method": "sock_impl_set_options", 00:35:07.887 "params": { 00:35:07.887 "impl_name": "ssl", 00:35:07.887 "recv_buf_size": 4096, 00:35:07.887 "send_buf_size": 4096, 00:35:07.887 "enable_recv_pipe": true, 00:35:07.887 "enable_quickack": false, 00:35:07.887 "enable_placement_id": 0, 00:35:07.887 "enable_zerocopy_send_server": true, 00:35:07.887 "enable_zerocopy_send_client": false, 00:35:07.887 "zerocopy_threshold": 0, 00:35:07.887 "tls_version": 0, 00:35:07.887 "enable_ktls": false 00:35:07.887 } 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "method": "sock_impl_set_options", 00:35:07.887 "params": { 00:35:07.887 "impl_name": "posix", 00:35:07.887 "recv_buf_size": 2097152, 00:35:07.887 "send_buf_size": 2097152, 00:35:07.887 "enable_recv_pipe": true, 00:35:07.887 "enable_quickack": false, 00:35:07.887 "enable_placement_id": 0, 00:35:07.887 "enable_zerocopy_send_server": true, 
00:35:07.887 "enable_zerocopy_send_client": false, 00:35:07.887 "zerocopy_threshold": 0, 00:35:07.887 "tls_version": 0, 00:35:07.887 "enable_ktls": false 00:35:07.887 } 00:35:07.887 } 00:35:07.887 ] 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "subsystem": "vmd", 00:35:07.887 "config": [] 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "subsystem": "accel", 00:35:07.887 "config": [ 00:35:07.887 { 00:35:07.887 "method": "accel_set_options", 00:35:07.887 "params": { 00:35:07.887 "small_cache_size": 128, 00:35:07.887 "large_cache_size": 16, 00:35:07.887 "task_count": 2048, 00:35:07.887 "sequence_count": 2048, 00:35:07.887 "buf_count": 2048 00:35:07.887 } 00:35:07.887 } 00:35:07.887 ] 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "subsystem": "bdev", 00:35:07.887 "config": [ 00:35:07.887 { 00:35:07.887 "method": "bdev_set_options", 00:35:07.887 "params": { 00:35:07.887 "bdev_io_pool_size": 65535, 00:35:07.887 "bdev_io_cache_size": 256, 00:35:07.887 "bdev_auto_examine": true, 00:35:07.887 "iobuf_small_cache_size": 128, 00:35:07.887 "iobuf_large_cache_size": 16 00:35:07.887 } 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "method": "bdev_raid_set_options", 00:35:07.887 "params": { 00:35:07.887 "process_window_size_kb": 1024, 00:35:07.887 "process_max_bandwidth_mb_sec": 0 00:35:07.887 } 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "method": "bdev_iscsi_set_options", 00:35:07.887 "params": { 00:35:07.887 "timeout_sec": 30 00:35:07.887 } 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "method": "bdev_nvme_set_options", 00:35:07.887 "params": { 00:35:07.887 "action_on_timeout": "none", 00:35:07.887 "timeout_us": 0, 00:35:07.887 "timeout_admin_us": 0, 00:35:07.887 "keep_alive_timeout_ms": 10000, 00:35:07.887 "arbitration_burst": 0, 00:35:07.887 "low_priority_weight": 0, 00:35:07.887 "medium_priority_weight": 0, 00:35:07.887 "high_priority_weight": 0, 00:35:07.887 "nvme_adminq_poll_period_us": 10000, 00:35:07.887 "nvme_ioq_poll_period_us": 0, 00:35:07.887 "io_queue_requests": 512, 
00:35:07.887 "delay_cmd_submit": true, 00:35:07.887 "transport_retry_count": 4, 00:35:07.887 "bdev_retry_count": 3, 00:35:07.887 "transport_ack_timeout": 0, 00:35:07.887 "ctrlr_loss_timeout_sec": 0, 00:35:07.887 "reconnect_delay_sec": 0, 00:35:07.887 "fast_io_fail_timeout_sec": 0, 00:35:07.887 "disable_auto_failback": false, 00:35:07.887 "generate_uuids": false, 00:35:07.887 "transport_tos": 0, 00:35:07.887 "nvme_error_stat": false, 00:35:07.887 "rdma_srq_size": 0, 00:35:07.887 "io_path_stat": false, 00:35:07.887 "allow_accel_sequence": false, 00:35:07.887 "rdma_max_cq_size": 0, 00:35:07.887 "rdma_cm_event_timeout_ms": 0, 00:35:07.887 "dhchap_digests": [ 00:35:07.887 "sha256", 00:35:07.887 "sha384", 00:35:07.887 "sha512" 00:35:07.887 ], 00:35:07.887 "dhchap_dhgroups": [ 00:35:07.887 "null", 00:35:07.887 "ffdhe2048", 00:35:07.887 "ffdhe3072", 00:35:07.887 "ffdhe4096", 00:35:07.887 "ffdhe6144", 00:35:07.887 "ffdhe8192" 00:35:07.887 ] 00:35:07.887 } 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "method": "bdev_nvme_attach_controller", 00:35:07.887 "params": { 00:35:07.887 "name": "nvme0", 00:35:07.887 "trtype": "TCP", 00:35:07.887 "adrfam": "IPv4", 00:35:07.887 "traddr": "127.0.0.1", 00:35:07.887 "trsvcid": "4420", 00:35:07.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.887 "prchk_reftag": false, 00:35:07.887 "prchk_guard": false, 00:35:07.887 "ctrlr_loss_timeout_sec": 0, 00:35:07.887 "reconnect_delay_sec": 0, 00:35:07.887 "fast_io_fail_timeout_sec": 0, 00:35:07.887 "psk": "key0", 00:35:07.887 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:07.887 "hdgst": false, 00:35:07.887 "ddgst": false, 00:35:07.887 "multipath": "multipath" 00:35:07.887 } 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "method": "bdev_nvme_set_hotplug", 00:35:07.887 "params": { 00:35:07.887 "period_us": 100000, 00:35:07.887 "enable": false 00:35:07.887 } 00:35:07.887 }, 00:35:07.887 { 00:35:07.887 "method": "bdev_wait_for_examine" 00:35:07.887 } 00:35:07.887 ] 00:35:07.887 }, 00:35:07.887 { 
00:35:07.887 "subsystem": "nbd", 00:35:07.887 "config": [] 00:35:07.887 } 00:35:07.887 ] 00:35:07.887 }' 00:35:07.887 11:29:35 keyring_file -- keyring/file.sh@115 -- # killprocess 143858 00:35:07.887 11:29:35 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 143858 ']' 00:35:07.887 11:29:35 keyring_file -- common/autotest_common.sh@958 -- # kill -0 143858 00:35:07.887 11:29:35 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:07.887 11:29:35 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.887 11:29:35 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 143858 00:35:07.887 11:29:35 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:07.887 11:29:35 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:07.887 11:29:35 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 143858' 00:35:07.887 killing process with pid 143858 00:35:07.887 11:29:35 keyring_file -- common/autotest_common.sh@973 -- # kill 143858 00:35:07.887 Received shutdown signal, test time was about 1.000000 seconds 00:35:07.887 00:35:07.887 Latency(us) 00:35:07.887 [2024-11-20T10:29:35.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.887 [2024-11-20T10:29:35.383Z] =================================================================================================================== 00:35:07.887 [2024-11-20T10:29:35.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.887 11:29:35 keyring_file -- common/autotest_common.sh@978 -- # wait 143858 00:35:08.147 11:29:35 keyring_file -- keyring/file.sh@118 -- # bperfpid=145378 00:35:08.147 11:29:35 keyring_file -- keyring/file.sh@120 -- # waitforlisten 145378 /var/tmp/bperf.sock 00:35:08.147 11:29:35 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 145378 ']' 00:35:08.147 11:29:35 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:08.147 11:29:35 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:08.147 11:29:35 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.147 11:29:35 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.147 11:29:35 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:08.147 "subsystems": [ 00:35:08.147 { 00:35:08.147 "subsystem": "keyring", 00:35:08.147 "config": [ 00:35:08.147 { 00:35:08.147 "method": "keyring_file_add_key", 00:35:08.147 "params": { 00:35:08.147 "name": "key0", 00:35:08.147 "path": "/tmp/tmp.pyKNKAi4er" 00:35:08.147 } 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "method": "keyring_file_add_key", 00:35:08.147 "params": { 00:35:08.147 "name": "key1", 00:35:08.147 "path": "/tmp/tmp.0uDabYV3zb" 00:35:08.147 } 00:35:08.147 } 00:35:08.147 ] 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "subsystem": "iobuf", 00:35:08.147 "config": [ 00:35:08.147 { 00:35:08.147 "method": "iobuf_set_options", 00:35:08.147 "params": { 00:35:08.147 "small_pool_count": 8192, 00:35:08.147 "large_pool_count": 1024, 00:35:08.147 "small_bufsize": 8192, 00:35:08.147 "large_bufsize": 135168, 00:35:08.147 "enable_numa": false 00:35:08.147 } 00:35:08.147 } 00:35:08.147 ] 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "subsystem": "sock", 00:35:08.147 "config": [ 00:35:08.147 { 00:35:08.147 "method": "sock_set_default_impl", 00:35:08.147 "params": { 00:35:08.147 "impl_name": "posix" 00:35:08.147 } 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "method": "sock_impl_set_options", 00:35:08.147 "params": { 00:35:08.147 "impl_name": "ssl", 00:35:08.147 "recv_buf_size": 4096, 00:35:08.147 
"send_buf_size": 4096, 00:35:08.147 "enable_recv_pipe": true, 00:35:08.147 "enable_quickack": false, 00:35:08.147 "enable_placement_id": 0, 00:35:08.147 "enable_zerocopy_send_server": true, 00:35:08.147 "enable_zerocopy_send_client": false, 00:35:08.147 "zerocopy_threshold": 0, 00:35:08.147 "tls_version": 0, 00:35:08.147 "enable_ktls": false 00:35:08.147 } 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "method": "sock_impl_set_options", 00:35:08.147 "params": { 00:35:08.147 "impl_name": "posix", 00:35:08.147 "recv_buf_size": 2097152, 00:35:08.147 "send_buf_size": 2097152, 00:35:08.147 "enable_recv_pipe": true, 00:35:08.147 "enable_quickack": false, 00:35:08.147 "enable_placement_id": 0, 00:35:08.147 "enable_zerocopy_send_server": true, 00:35:08.147 "enable_zerocopy_send_client": false, 00:35:08.147 "zerocopy_threshold": 0, 00:35:08.147 "tls_version": 0, 00:35:08.147 "enable_ktls": false 00:35:08.147 } 00:35:08.147 } 00:35:08.147 ] 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "subsystem": "vmd", 00:35:08.147 "config": [] 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "subsystem": "accel", 00:35:08.147 "config": [ 00:35:08.147 { 00:35:08.147 "method": "accel_set_options", 00:35:08.147 "params": { 00:35:08.147 "small_cache_size": 128, 00:35:08.147 "large_cache_size": 16, 00:35:08.147 "task_count": 2048, 00:35:08.147 "sequence_count": 2048, 00:35:08.147 "buf_count": 2048 00:35:08.147 } 00:35:08.147 } 00:35:08.147 ] 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "subsystem": "bdev", 00:35:08.147 "config": [ 00:35:08.147 { 00:35:08.147 "method": "bdev_set_options", 00:35:08.147 "params": { 00:35:08.147 "bdev_io_pool_size": 65535, 00:35:08.147 "bdev_io_cache_size": 256, 00:35:08.147 "bdev_auto_examine": true, 00:35:08.147 "iobuf_small_cache_size": 128, 00:35:08.147 "iobuf_large_cache_size": 16 00:35:08.147 } 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "method": "bdev_raid_set_options", 00:35:08.147 "params": { 00:35:08.147 "process_window_size_kb": 1024, 00:35:08.147 
"process_max_bandwidth_mb_sec": 0 00:35:08.147 } 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "method": "bdev_iscsi_set_options", 00:35:08.147 "params": { 00:35:08.147 "timeout_sec": 30 00:35:08.147 } 00:35:08.147 }, 00:35:08.147 { 00:35:08.147 "method": "bdev_nvme_set_options", 00:35:08.147 "params": { 00:35:08.147 "action_on_timeout": "none", 00:35:08.147 "timeout_us": 0, 00:35:08.147 "timeout_admin_us": 0, 00:35:08.147 "keep_alive_timeout_ms": 10000, 00:35:08.147 "arbitration_burst": 0, 00:35:08.147 "low_priority_weight": 0, 00:35:08.147 "medium_priority_weight": 0, 00:35:08.148 "high_priority_weight": 0, 00:35:08.148 "nvme_adminq_poll_period_us": 10000, 00:35:08.148 "nvme_ioq_poll_period_us": 0, 00:35:08.148 "io_queue_requests": 512, 00:35:08.148 "delay_cmd_submit": true, 00:35:08.148 "transport_retry_count": 4, 00:35:08.148 "bdev_retry_count": 3, 00:35:08.148 "transport_ack_timeout": 0, 00:35:08.148 "ctrlr_loss_timeout_sec": 0, 00:35:08.148 "reconnect_delay_sec": 0, 00:35:08.148 "fast_io_fail_timeout_sec": 0, 00:35:08.148 "disable_auto_failback": false, 00:35:08.148 "generate_uuids": false, 00:35:08.148 "transport_tos": 0, 00:35:08.148 "nvme_error_stat": false, 00:35:08.148 "rdma_srq_size": 0, 00:35:08.148 "io_path_stat": false, 00:35:08.148 "allow_accel_sequence": false, 00:35:08.148 "rdma_max_cq_size": 0, 00:35:08.148 "rdma_cm_event_timeout_ms": 0, 00:35:08.148 "dhchap_digests": [ 00:35:08.148 "sha256", 00:35:08.148 "sha384", 00:35:08.148 "sha512" 00:35:08.148 ], 00:35:08.148 "dhchap_dhgroups": [ 00:35:08.148 "null", 00:35:08.148 "ffdhe2048", 00:35:08.148 "ffdhe3072", 00:35:08.148 "ffdhe4096", 00:35:08.148 "ffdhe6144", 00:35:08.148 "ffdhe8192" 00:35:08.148 ] 00:35:08.148 } 00:35:08.148 }, 00:35:08.148 { 00:35:08.148 "method": "bdev_nvme_attach_controller", 00:35:08.148 "params": { 00:35:08.148 "name": "nvme0", 00:35:08.148 "trtype": "TCP", 00:35:08.148 "adrfam": "IPv4", 00:35:08.148 "traddr": "127.0.0.1", 00:35:08.148 "trsvcid": "4420", 00:35:08.148 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:08.148 "prchk_reftag": false, 00:35:08.148 "prchk_guard": false, 00:35:08.148 "ctrlr_loss_timeout_sec": 0, 00:35:08.148 "reconnect_delay_sec": 0, 00:35:08.148 "fast_io_fail_timeout_sec": 0, 00:35:08.148 "psk": "key0", 00:35:08.148 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:08.148 "hdgst": false, 00:35:08.148 "ddgst": false, 00:35:08.148 "multipath": "multipath" 00:35:08.148 } 00:35:08.148 }, 00:35:08.148 { 00:35:08.148 "method": "bdev_nvme_set_hotplug", 00:35:08.148 "params": { 00:35:08.148 "period_us": 100000, 00:35:08.148 "enable": false 00:35:08.148 } 00:35:08.148 }, 00:35:08.148 { 00:35:08.148 "method": "bdev_wait_for_examine" 00:35:08.148 } 00:35:08.148 ] 00:35:08.148 }, 00:35:08.148 { 00:35:08.148 "subsystem": "nbd", 00:35:08.148 "config": [] 00:35:08.148 } 00:35:08.148 ] 00:35:08.148 }' 00:35:08.148 11:29:35 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.148 11:29:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:08.148 [2024-11-20 11:29:35.575816] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:35:08.148 [2024-11-20 11:29:35.575865] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145378 ] 00:35:08.406 [2024-11-20 11:29:35.651843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.406 [2024-11-20 11:29:35.694660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.406 [2024-11-20 11:29:35.855260] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:08.973 11:29:36 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.973 11:29:36 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:08.973 11:29:36 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:08.973 11:29:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.973 11:29:36 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:09.231 11:29:36 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:09.231 11:29:36 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:09.231 11:29:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:09.231 11:29:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.231 11:29:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.231 11:29:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:09.231 11:29:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.489 11:29:36 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:09.490 11:29:36 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:09.490 11:29:36 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:09.490 11:29:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.490 11:29:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.490 11:29:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:09.490 11:29:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.748 11:29:37 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:09.748 11:29:37 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:09.748 11:29:37 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:09.748 11:29:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:09.748 11:29:37 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:09.748 11:29:37 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:09.748 11:29:37 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.pyKNKAi4er /tmp/tmp.0uDabYV3zb 00:35:09.748 11:29:37 keyring_file -- keyring/file.sh@20 -- # killprocess 145378 00:35:09.748 11:29:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 145378 ']' 00:35:09.748 11:29:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 145378 00:35:09.748 11:29:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:09.748 11:29:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.748 11:29:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145378 00:35:10.007 11:29:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:10.007 11:29:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:10.007 11:29:37 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 145378' 00:35:10.007 killing process with pid 145378 00:35:10.007 11:29:37 keyring_file -- common/autotest_common.sh@973 -- # kill 145378 00:35:10.007 Received shutdown signal, test time was about 1.000000 seconds 00:35:10.007 00:35:10.007 Latency(us) 00:35:10.007 [2024-11-20T10:29:37.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.008 [2024-11-20T10:29:37.504Z] =================================================================================================================== 00:35:10.008 [2024-11-20T10:29:37.504Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@978 -- # wait 145378 00:35:10.008 11:29:37 keyring_file -- keyring/file.sh@21 -- # killprocess 143852 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 143852 ']' 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 143852 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 143852 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 143852' 00:35:10.008 killing process with pid 143852 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@973 -- # kill 143852 00:35:10.008 11:29:37 keyring_file -- common/autotest_common.sh@978 -- # wait 143852 00:35:10.577 00:35:10.577 real 0m11.947s 00:35:10.577 user 0m29.696s 00:35:10.577 sys 0m2.784s 00:35:10.577 11:29:37 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.577 11:29:37 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:10.577 ************************************ 00:35:10.577 END TEST keyring_file 00:35:10.577 ************************************ 00:35:10.577 11:29:37 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:10.577 11:29:37 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:10.577 11:29:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:10.577 11:29:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.577 11:29:37 -- common/autotest_common.sh@10 -- # set +x 00:35:10.577 ************************************ 00:35:10.577 START TEST keyring_linux 00:35:10.577 ************************************ 00:35:10.577 11:29:37 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:10.577 Joined session keyring: 807696379 00:35:10.577 * Looking for test storage... 
00:35:10.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:10.577 11:29:37 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:10.577 11:29:37 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:10.577 11:29:37 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:10.577 11:29:38 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.577 11:29:38 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:10.577 11:29:38 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.577 11:29:38 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:10.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.577 --rc genhtml_branch_coverage=1 00:35:10.577 --rc genhtml_function_coverage=1 00:35:10.577 --rc genhtml_legend=1 00:35:10.577 --rc geninfo_all_blocks=1 00:35:10.577 --rc geninfo_unexecuted_blocks=1 00:35:10.577 00:35:10.577 ' 00:35:10.577 11:29:38 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:10.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.577 --rc genhtml_branch_coverage=1 00:35:10.577 --rc genhtml_function_coverage=1 00:35:10.577 --rc genhtml_legend=1 00:35:10.577 --rc geninfo_all_blocks=1 00:35:10.577 --rc geninfo_unexecuted_blocks=1 00:35:10.577 00:35:10.577 ' 
00:35:10.577 11:29:38 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:10.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.577 --rc genhtml_branch_coverage=1 00:35:10.578 --rc genhtml_function_coverage=1 00:35:10.578 --rc genhtml_legend=1 00:35:10.578 --rc geninfo_all_blocks=1 00:35:10.578 --rc geninfo_unexecuted_blocks=1 00:35:10.578 00:35:10.578 ' 00:35:10.578 11:29:38 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:10.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.578 --rc genhtml_branch_coverage=1 00:35:10.578 --rc genhtml_function_coverage=1 00:35:10.578 --rc genhtml_legend=1 00:35:10.578 --rc geninfo_all_blocks=1 00:35:10.578 --rc geninfo_unexecuted_blocks=1 00:35:10.578 00:35:10.578 ' 00:35:10.578 11:29:38 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:10.578 11:29:38 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.578 11:29:38 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.578 11:29:38 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:10.837 11:29:38 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.837 11:29:38 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.837 11:29:38 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.837 11:29:38 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.837 11:29:38 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.837 11:29:38 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.837 11:29:38 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:10.837 11:29:38 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:10.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:10.837 11:29:38 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:10.837 11:29:38 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:10.837 11:29:38 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:10.837 11:29:38 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:10.837 11:29:38 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:10.837 11:29:38 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:10.837 /tmp/:spdk-test:key0 00:35:10.837 11:29:38 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:10.837 11:29:38 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:10.837 11:29:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:10.837 /tmp/:spdk-test:key1 00:35:10.837 11:29:38 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=145936 00:35:10.837 11:29:38 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 145936 00:35:10.837 11:29:38 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:10.837 11:29:38 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 145936 ']' 00:35:10.837 11:29:38 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.837 11:29:38 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:10.837 11:29:38 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.837 11:29:38 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:10.837 11:29:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:10.837 [2024-11-20 11:29:38.213173] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:35:10.837 [2024-11-20 11:29:38.213223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145936 ] 00:35:10.837 [2024-11-20 11:29:38.289316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.096 [2024-11-20 11:29:38.331818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.096 11:29:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.096 11:29:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:11.096 11:29:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:11.096 11:29:38 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.096 11:29:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:11.096 [2024-11-20 11:29:38.539087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.096 null0 00:35:11.096 [2024-11-20 11:29:38.571142] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:11.096 [2024-11-20 11:29:38.571511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:11.355 11:29:38 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.355 11:29:38 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:11.355 725202747 00:35:11.355 11:29:38 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:11.355 417914264 00:35:11.355 11:29:38 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=145999 00:35:11.355 11:29:38 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 145999 /var/tmp/bperf.sock 00:35:11.355 11:29:38 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:11.355 11:29:38 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 145999 ']' 00:35:11.355 11:29:38 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.355 11:29:38 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.355 11:29:38 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.355 11:29:38 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.355 11:29:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:11.355 [2024-11-20 11:29:38.642209] Starting SPDK v25.01-pre git sha1 46fd068fc / DPDK 24.03.0 initialization... 
00:35:11.355 [2024-11-20 11:29:38.642250] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145999 ] 00:35:11.355 [2024-11-20 11:29:38.699925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.355 [2024-11-20 11:29:38.743429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.355 11:29:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.355 11:29:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:11.355 11:29:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:11.355 11:29:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:11.613 11:29:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:11.613 11:29:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:11.873 11:29:39 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:11.873 11:29:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:12.133 [2024-11-20 11:29:39.424832] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:12.133 nvme0n1 00:35:12.133 11:29:39 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:12.133 11:29:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:12.133 11:29:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:12.133 11:29:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:12.133 11:29:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:12.133 11:29:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.391 11:29:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:12.391 11:29:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:12.391 11:29:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:12.391 11:29:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:12.391 11:29:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.392 11:29:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:12.392 11:29:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.650 11:29:39 keyring_linux -- keyring/linux.sh@25 -- # sn=725202747 00:35:12.650 11:29:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:12.650 11:29:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:12.650 11:29:39 keyring_linux -- keyring/linux.sh@26 -- # [[ 725202747 == \7\2\5\2\0\2\7\4\7 ]] 00:35:12.650 11:29:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 725202747 00:35:12.650 11:29:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:12.650 11:29:39 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.650 Running I/O for 1 seconds... 00:35:13.585 20927.00 IOPS, 81.75 MiB/s 00:35:13.585 Latency(us) 00:35:13.585 [2024-11-20T10:29:41.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.585 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:13.585 nvme0n1 : 1.01 20929.38 81.76 0.00 0.00 6095.50 4729.99 12024.43 00:35:13.585 [2024-11-20T10:29:41.081Z] =================================================================================================================== 00:35:13.585 [2024-11-20T10:29:41.081Z] Total : 20929.38 81.76 0.00 0.00 6095.50 4729.99 12024.43 00:35:13.585 { 00:35:13.585 "results": [ 00:35:13.585 { 00:35:13.585 "job": "nvme0n1", 00:35:13.585 "core_mask": "0x2", 00:35:13.585 "workload": "randread", 00:35:13.585 "status": "finished", 00:35:13.585 "queue_depth": 128, 00:35:13.585 "io_size": 4096, 00:35:13.585 "runtime": 1.00605, 00:35:13.585 "iops": 20929.377267531436, 00:35:13.585 "mibps": 81.75537995129467, 00:35:13.585 "io_failed": 0, 00:35:13.585 "io_timeout": 0, 00:35:13.585 "avg_latency_us": 6095.499045857011, 00:35:13.585 "min_latency_us": 4729.989565217391, 00:35:13.585 "max_latency_us": 12024.431304347827 00:35:13.585 } 00:35:13.585 ], 00:35:13.585 "core_count": 1 00:35:13.585 } 00:35:13.585 11:29:41 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:13.585 11:29:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:13.843 11:29:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:13.843 11:29:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:13.843 11:29:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:13.843 11:29:41 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:13.843 11:29:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:13.843 11:29:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.102 11:29:41 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:14.102 11:29:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:14.102 11:29:41 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:14.102 11:29:41 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:14.102 11:29:41 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:14.102 11:29:41 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:14.102 11:29:41 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:14.102 11:29:41 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.102 11:29:41 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:14.102 11:29:41 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.102 11:29:41 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:14.102 11:29:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:14.361 [2024-11-20 11:29:41.650297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:14.361 [2024-11-20 11:29:41.650984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0fa70 (107): Transport endpoint is not connected 00:35:14.361 [2024-11-20 11:29:41.651979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0fa70 (9): Bad file descriptor 00:35:14.361 [2024-11-20 11:29:41.652980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:14.361 [2024-11-20 11:29:41.652990] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:14.361 [2024-11-20 11:29:41.652998] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:14.361 [2024-11-20 11:29:41.653006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:14.361 request: 00:35:14.361 { 00:35:14.361 "name": "nvme0", 00:35:14.361 "trtype": "tcp", 00:35:14.361 "traddr": "127.0.0.1", 00:35:14.361 "adrfam": "ipv4", 00:35:14.361 "trsvcid": "4420", 00:35:14.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.361 "prchk_reftag": false, 00:35:14.361 "prchk_guard": false, 00:35:14.361 "hdgst": false, 00:35:14.361 "ddgst": false, 00:35:14.361 "psk": ":spdk-test:key1", 00:35:14.361 "allow_unrecognized_csi": false, 00:35:14.361 "method": "bdev_nvme_attach_controller", 00:35:14.361 "req_id": 1 00:35:14.361 } 00:35:14.361 Got JSON-RPC error response 00:35:14.361 response: 00:35:14.361 { 00:35:14.361 "code": -5, 00:35:14.361 "message": "Input/output error" 00:35:14.361 } 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@33 -- # sn=725202747 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 725202747 00:35:14.361 1 links removed 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:14.361 
11:29:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@33 -- # sn=417914264 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 417914264 00:35:14.361 1 links removed 00:35:14.361 11:29:41 keyring_linux -- keyring/linux.sh@41 -- # killprocess 145999 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 145999 ']' 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 145999 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145999 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145999' 00:35:14.361 killing process with pid 145999 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@973 -- # kill 145999 00:35:14.361 Received shutdown signal, test time was about 1.000000 seconds 00:35:14.361 00:35:14.361 Latency(us) 00:35:14.361 [2024-11-20T10:29:41.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.361 [2024-11-20T10:29:41.857Z] =================================================================================================================== 00:35:14.361 [2024-11-20T10:29:41.857Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.361 11:29:41 keyring_linux -- common/autotest_common.sh@978 -- # wait 145999 
00:35:14.621 11:29:41 keyring_linux -- keyring/linux.sh@42 -- # killprocess 145936 00:35:14.621 11:29:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 145936 ']' 00:35:14.621 11:29:41 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 145936 00:35:14.621 11:29:41 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:14.621 11:29:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.621 11:29:41 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145936 00:35:14.621 11:29:41 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:14.621 11:29:41 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:14.621 11:29:41 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145936' 00:35:14.621 killing process with pid 145936 00:35:14.621 11:29:41 keyring_linux -- common/autotest_common.sh@973 -- # kill 145936 00:35:14.621 11:29:41 keyring_linux -- common/autotest_common.sh@978 -- # wait 145936 00:35:14.881 00:35:14.881 real 0m4.383s 00:35:14.881 user 0m8.330s 00:35:14.881 sys 0m1.422s 00:35:14.881 11:29:42 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.881 11:29:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:14.881 ************************************ 00:35:14.881 END TEST keyring_linux 00:35:14.881 ************************************ 00:35:14.881 11:29:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:14.881 11:29:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:14.881 11:29:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:14.881 11:29:42 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:14.881 11:29:42 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:14.881 11:29:42 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:14.881 11:29:42 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:14.881 11:29:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:14.881 11:29:42 -- common/autotest_common.sh@10 -- # set +x 00:35:14.881 11:29:42 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:14.881 11:29:42 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:14.881 11:29:42 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:14.881 11:29:42 -- common/autotest_common.sh@10 -- # set +x 00:35:20.156 INFO: APP EXITING 00:35:20.156 INFO: killing all VMs 00:35:20.156 INFO: killing vhost app 00:35:20.156 INFO: EXIT DONE 00:35:22.692 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:22.692 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:22.692 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:22.692 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:22.692 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:22.692 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:22.692 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:22.692 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:22.692 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:22.692 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:22.692 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:22.692 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:22.692 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:22.951 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:22.951 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:22.951 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:22.951 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:26.244 Cleaning 00:35:26.244 Removing: /var/run/dpdk/spdk0/config 00:35:26.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:26.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:26.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:26.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:26.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:26.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:26.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:26.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:26.244 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:26.244 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:26.244 Removing: /var/run/dpdk/spdk1/config 00:35:26.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:26.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:26.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:26.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:26.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:26.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:26.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:26.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:26.244 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:26.244 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:26.244 Removing: /var/run/dpdk/spdk2/config 00:35:26.244 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:26.244 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:26.244 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:26.244 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:26.244 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:26.244 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:26.244 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:26.244 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:26.244 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:26.244 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:26.244 Removing: /var/run/dpdk/spdk3/config 00:35:26.244 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:26.244 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:26.244 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:26.244 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:26.244 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:26.244 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:26.244 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:26.244 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:26.244 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:26.244 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:26.244 Removing: /var/run/dpdk/spdk4/config 00:35:26.244 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:26.244 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:26.244 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:26.244 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:26.244 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:26.244 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:26.244 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:26.244 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:26.244 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:26.244 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:35:26.244 Removing: /dev/shm/bdev_svc_trace.1 00:35:26.244 Removing: /dev/shm/nvmf_trace.0 00:35:26.244 Removing: /dev/shm/spdk_tgt_trace.pid3860093 00:35:26.244 Removing: /var/run/dpdk/spdk0 00:35:26.244 Removing: /var/run/dpdk/spdk1 00:35:26.244 Removing: /var/run/dpdk/spdk2 00:35:26.244 Removing: /var/run/dpdk/spdk3 00:35:26.244 Removing: /var/run/dpdk/spdk4 00:35:26.244 Removing: /var/run/dpdk/spdk_pid102520 00:35:26.244 Removing: /var/run/dpdk/spdk_pid10277 00:35:26.244 Removing: /var/run/dpdk/spdk_pid105200 00:35:26.244 Removing: /var/run/dpdk/spdk_pid11012 00:35:26.244 Removing: /var/run/dpdk/spdk_pid113697 00:35:26.244 Removing: /var/run/dpdk/spdk_pid113702 00:35:26.244 Removing: /var/run/dpdk/spdk_pid11488 00:35:26.244 Removing: /var/run/dpdk/spdk_pid118730 00:35:26.244 Removing: /var/run/dpdk/spdk_pid11969 00:35:26.244 Removing: /var/run/dpdk/spdk_pid120697 00:35:26.244 Removing: /var/run/dpdk/spdk_pid122662 00:35:26.244 Removing: /var/run/dpdk/spdk_pid123711 00:35:26.244 Removing: /var/run/dpdk/spdk_pid125739 00:35:26.244 Removing: /var/run/dpdk/spdk_pid12656 00:35:26.244 Removing: /var/run/dpdk/spdk_pid126960 00:35:26.244 Removing: /var/run/dpdk/spdk_pid135694 00:35:26.244 Removing: /var/run/dpdk/spdk_pid136154 00:35:26.244 Removing: /var/run/dpdk/spdk_pid136619 00:35:26.244 Removing: /var/run/dpdk/spdk_pid138904 00:35:26.244 Removing: /var/run/dpdk/spdk_pid139473 00:35:26.244 Removing: /var/run/dpdk/spdk_pid140028 00:35:26.244 Removing: /var/run/dpdk/spdk_pid143852 00:35:26.244 Removing: /var/run/dpdk/spdk_pid143858 00:35:26.244 Removing: /var/run/dpdk/spdk_pid145378 00:35:26.244 Removing: /var/run/dpdk/spdk_pid145936 00:35:26.244 Removing: /var/run/dpdk/spdk_pid145999 00:35:26.244 Removing: /var/run/dpdk/spdk_pid16828 00:35:26.244 Removing: /var/run/dpdk/spdk_pid17288 00:35:26.244 Removing: /var/run/dpdk/spdk_pid23522 00:35:26.244 Removing: /var/run/dpdk/spdk_pid23783 00:35:26.244 Removing: /var/run/dpdk/spdk_pid29045 00:35:26.244 Removing: 
/var/run/dpdk/spdk_pid33290 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3857942 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3859007 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3860093 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3860728 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3861676 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3861694 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3862687 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3862893 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3863147 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3864769 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3865932 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3866322 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3866587 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3866785 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3867000 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3867250 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3867501 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3867787 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3868536 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3871530 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3871785 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3872041 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3872134 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3872544 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3872704 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3873044 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3873206 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3873529 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3873535 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3873791 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3873802 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3874365 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3874619 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3874921 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3878621 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3882895 00:35:26.244 Removing: /var/run/dpdk/spdk_pid3893635 
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3894130
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3898441
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3898868
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3903142
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3909020
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3911633
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3922077
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3931229
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3932891
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3933911
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3951379
00:35:26.244 Removing: /var/run/dpdk/spdk_pid3955412
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4001525
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4006844
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4012605
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4019325
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4019327
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4020243
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4021024
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4021860
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4022544
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4022547
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4022778
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4022836
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4022992
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4023768
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4024623
00:35:26.244 Removing: /var/run/dpdk/spdk_pid4025542
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4026073
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4026224
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4026459
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4027479
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4028461
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4037281
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4066133
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4071124
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4072753
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4074525
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4074606
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4074836
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4074863
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4075360
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4077194
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4077981
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4078464
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4080643
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4081056
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4081772
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4086006
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4091446
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4091447
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4091448
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4095236
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4103566
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4107602
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4114118
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4115465
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4116960
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4118503
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4122991
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4127336
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4131354
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4138726
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4138843
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4143435
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4143664
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4143897
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4144349
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4144361
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4148862
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4149428
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4153768
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4156517
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4161959
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4167753
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4176460
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4183465
00:35:26.504 Removing: /var/run/dpdk/spdk_pid4183527
00:35:26.504 Removing: /var/run/dpdk/spdk_pid43013
00:35:26.504 Removing: /var/run/dpdk/spdk_pid43542
00:35:26.504 Removing: /var/run/dpdk/spdk_pid47741
00:35:26.504 Removing: /var/run/dpdk/spdk_pid47995
00:35:26.504 Removing: /var/run/dpdk/spdk_pid52247
00:35:26.504 Removing: /var/run/dpdk/spdk_pid58098
00:35:26.504 Removing: /var/run/dpdk/spdk_pid60662
00:35:26.504 Removing: /var/run/dpdk/spdk_pid71253
00:35:26.504 Removing: /var/run/dpdk/spdk_pid80048
00:35:26.504 Removing: /var/run/dpdk/spdk_pid81659
00:35:26.504 Removing: /var/run/dpdk/spdk_pid82572
00:35:26.504 Removing: /var/run/dpdk/spdk_pid8637
00:35:26.504 Removing: /var/run/dpdk/spdk_pid9115
00:35:26.504 Removing: /var/run/dpdk/spdk_pid9800
00:35:26.763 Removing: /var/run/dpdk/spdk_pid98709
00:35:26.763 Clean
11:29:54 -- common/autotest_common.sh@1453 -- # return 0
00:35:26.763 11:29:54 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:26.763 11:29:54 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:26.763 11:29:54 -- common/autotest_common.sh@10 -- # set +x
00:35:26.763 11:29:54 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:26.763 11:29:54 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:26.763 11:29:54 -- common/autotest_common.sh@10 -- # set +x
00:35:26.763 11:29:54 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:26.763 11:29:54 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:26.763 11:29:54 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:26.763 11:29:54 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:26.763 11:29:54 -- spdk/autotest.sh@398 -- # hostname
00:35:26.763 11:29:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:27.022 geninfo: WARNING: invalid characters removed from testname!
00:35:48.959 11:30:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:50.864 11:30:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:52.768 11:30:19 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:54.672 11:30:21 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:56.049 11:30:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:57.956 11:30:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:59.861 11:30:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:59.861 11:30:27 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:59.861 11:30:27 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:35:59.861 11:30:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:59.861 11:30:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:59.861 11:30:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:59.861 + [[ -n 3780438 ]]
00:35:59.861 + sudo kill 3780438
00:35:59.870 [Pipeline] }
00:35:59.887 [Pipeline] // stage
00:35:59.892 [Pipeline] }
00:35:59.908 [Pipeline] // timeout
00:35:59.913 [Pipeline] }
00:35:59.927 [Pipeline] // catchError
00:35:59.932 [Pipeline] }
00:35:59.946 [Pipeline] // wrap
00:35:59.952 [Pipeline] }
00:35:59.964 [Pipeline] // catchError
00:35:59.973 [Pipeline] stage
00:35:59.976 [Pipeline] { (Epilogue)
00:35:59.989 [Pipeline] catchError
00:35:59.991 [Pipeline] {
00:36:00.018 [Pipeline] echo
00:36:00.024 Cleanup processes
00:36:00.030 [Pipeline] sh
00:36:00.317 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:00.317 157106 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:00.405 [Pipeline] sh
00:36:00.743 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:00.744 ++ grep -v 'sudo pgrep'
00:36:00.744 ++ awk '{print $1}'
00:36:00.744 + sudo kill -9
00:36:00.744 + true
00:36:00.756 [Pipeline] sh
00:36:01.041 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:13.268 [Pipeline] sh
00:36:13.555 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:13.555 Artifacts sizes are good
00:36:13.570 [Pipeline] archiveArtifacts
00:36:13.577 Archiving artifacts
00:36:13.706 [Pipeline] sh
00:36:13.993 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:14.009 [Pipeline] cleanWs
00:36:14.019 [WS-CLEANUP] Deleting project workspace...
00:36:14.019 [WS-CLEANUP] Deferred wipeout is used...
00:36:14.026 [WS-CLEANUP] done
00:36:14.028 [Pipeline] }
00:36:14.046 [Pipeline] // catchError
00:36:14.060 [Pipeline] sh
00:36:14.344 + logger -p user.info -t JENKINS-CI
00:36:14.354 [Pipeline] }
00:36:14.369 [Pipeline] // stage
00:36:14.376 [Pipeline] }
00:36:14.390 [Pipeline] // node
00:36:14.396 [Pipeline] End of Pipeline
00:36:14.448 Finished: SUCCESS